U.S. Department of Energy (DOE) all webpages (Extended Search)
Math Kernel Library (MKL). Description: The Intel Math Kernel Library (Intel MKL) contains highly optimized, extensively threaded math routines for science, ...
Adaptive wiener image restoration kernel
Yuan, Ding
2007-06-05
A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins by constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is then restored by spatial convolution of the image with a Wiener restoration kernel.
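The restoration step described above can be sketched with the textbook (non-adaptive) Wiener kernel; the flat noise-to-signal power ratio below is an illustrative assumption, not the patented adaptive construction:

```python
import numpy as np

def wiener_restore(image, otf, noise_power, signal_power):
    """Restore an image given the system OTF and spectral power estimates.

    W = conj(H) / (|H|^2 + Pn/Ps)  -- the classic Wiener restoration kernel.
    All arrays are 2-D and the same shape; this is an illustrative sketch,
    not the adaptive method of the patent.
    """
    G = np.fft.fft2(image)                        # observed image spectrum
    W = np.conj(otf) / (np.abs(otf) ** 2 + noise_power / signal_power)
    return np.real(np.fft.ifft2(W * G))           # spatial-domain estimate

# Sanity check: an identity OTF and negligible noise return the image unchanged.
img = np.arange(16.0).reshape(4, 4)
restored = wiener_restore(img, np.ones((4, 4)), noise_power=1e-12, signal_power=1.0)
```

In the noiseless limit the kernel reduces to straight inverse filtering, 1/H; the Pn/Ps term is what keeps the division stable where the OTF is small.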
Catamount n-Way LightWeight KerneL R&D 100 Entry. Submitting organization: Sandia National Laboratories, PO Box 5800, Albuquerque, NM 87185-1319 USA. Ron Brightwell, Phone: (505) 844-2099, Fax: (505) 845-7442, rbbrigh@sandia.gov. AFFIRMATION: I affirm that all information submitted as a part of, or supplemental to, this entry is a fair and accurate representation of this product. Ron Brightwell. Joint entry
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.
Robotic Intelligence Kernel: Communications
Energy Science and Technology Software Center (OSTI)
2009-09-16
The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.
Robotic Intelligence Kernel: Driver
Energy Science and Technology Software Center (OSTI)
2009-09-16
The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.
Robotic Intelligence Kernel: Visualization
Energy Science and Technology Software Center (OSTI)
2009-09-16
The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.
Robotic Intelligence Kernel: Architecture
Energy Science and Technology Software Center (OSTI)
2009-09-16
The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.
Bruemmer, David J.
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
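The dynamic autonomy structure can be pictured as a ladder of modes between the two extremes the abstract names; the intermediate mode names and the linear initiative split below are illustrative assumptions, not the patent's definitions:

```python
from enum import IntEnum

class AutonomyMode(IntEnum):
    """Modes spanning maximal operator intervention to maximal robot initiative.
    TELEOPERATION and AUTONOMOUS come from the abstract; the intermediate
    names are hypothetical placeholders."""
    TELEOPERATION = 0   # operator intervention maximized, robot initiative minimized
    SAFE = 1
    SHARED = 2
    COLLABORATIVE = 3
    AUTONOMOUS = 4      # operator intervention minimized, robot initiative maximized

def robot_initiative(mode: AutonomyMode) -> float:
    """Toy 'transaction capacity' split: fraction of initiative held by the robot."""
    return mode / (len(AutonomyMode) - 1)
```

Raising the mode hands initiative to the robot; lowering it hands control back to the operator, which is the modification the dynamic autonomy structure performs at run time.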
Linux Kernel Error Detection and Correction
Energy Science and Technology Software Center (OSTI)
2007-04-11
EDAC-utils consists of a library and a set of utilities for retrieving statistics from the Linux Kernel Error Detection and Correction (EDAC) drivers.
V-098: Linux Kernel Extended Verification Module Bug Lets Local...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
reported in the Linux Kernel. REFERENCE LINKS: The Linux Kernel Archives Linux Kernel Red Hat Bugzilla - Bug 913266 SecurityTracker Alert ID: 1028196 CVE-2013-0313 IMPACT...
Time Adaptive Conditional Kernel Density Estimation for Wind...
Office of Scientific and Technical Information (OSTI)
Time Adaptive Conditional Kernel Density Estimation for Wind Power Forecasting Citation Details In-Document Search Title: Time Adaptive Conditional Kernel Density Estimation for ...
Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications...
Office of Scientific and Technical Information (OSTI)
Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications Citation Details In-Document Search Title: Linux Kernel Co-Scheduling For Bulk Synchronous Parallel ...
NCCS Regression Test Harness
Energy Science and Technology Software Center (OSTI)
2015-09-09
The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.
KITTEN Lightweight Kernel 0.1 Beta
Energy Science and Technology Software Center (OSTI)
2007-12-12
The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.
TICK: Transparent Incremental Checkpointing at Kernel Level
Energy Science and Technology Software Center (OSTI)
2004-10-25
TICK is a software package implemented in Linux 2.6 that allows the save and restore of user processes, without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can be later thawed in another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in the Linux version 2.6.5
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
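For contrast with the subquadratic algorithm the abstract describes, the naive sampler for this model fits in a few lines. Vertex types x_i = i/n and edge probability min(1, κ(x_i, x_j)/n) are the standard inhomogeneous-random-graph conventions; the paper's contribution is avoiding exactly this O(n²) loop:

```python
import random

def sample_kernel_graph(n, kappa, seed=0):
    """Naive O(n^2) sampler for an inhomogeneous random kernel graph:
    vertex i gets type x_i = i/n, and edge {i, j} appears independently
    with probability min(1, kappa(x_i, x_j) / n).  Illustration only --
    the paper's algorithm achieves roughly O(n (log n)^2)."""
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, kappa(i / n, j / n) / n)
            if rng.random() < p:
                edges.append((i, j))
    return edges

# A constant kernel kappa = c recovers an Erdos-Renyi graph with p = c/n,
# so the expected edge count here is about C(200, 2) * 5/200 ~ 500.
edges = sample_kernel_graph(200, lambda x, y: 5.0)
```

A non-constant kernel, e.g. κ(x, y) = c/√(xy) truncated at 1, is how such models produce heavy-tailed degree sequences.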
Accuracy of Reduced and Extended Thin-Wire Kernels
Burke, G J
2008-11-24
Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function and results are shown for simple wire structures.
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber
2010-10-01
Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have also been performed of additional modifications aimed at increasing the capacity of the current fabrication line for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.
Wilson Dslash Kernel From Lattice QCD Optimization
Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we detail our work optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on regular Xeon architectures as well.
PySKI: THE PYTHON SPARSE KERNEL INTERFACE
This content is based on slides produced by Tom Deakin and Simon, which were based on slides by Tim and Simon, with help from Ben Gaster (Qualcomm). Agenda (lectures paired with exercises): An Introduction to OpenCL / Logging in and running the Vadd program; Understanding Host programs / Chaining Vadd kernels together; Kernel programs / The D = A + B + C problem; Writing Kernel Programs / Matrix Multiplication; Lunch; Working with the OpenCL memory model / Several ways to optimize matrix multiplication; High Performance OpenCL
U-086: Linux Kernel "/proc/&lt;pid&gt;/mem" Privilege Escalation Vulnerability
A vulnerability has been discovered in the Linux Kernel, which can be exploited by malicious, local users to gain escalated privileges.
V-169: Linux Kernel "iscsi_add_notunderstood_response()" Buffer...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
has been reported in Linux Kernel. REFERENCE LINKS: Secunia Advisory SA53670 Red Hat Bugzilla - Bug 968036 CVE-2013-2850 IMPACT ASSESSMENT: Medium DISCUSSION: The...
Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism ...
Office of Scientific and Technical Information (OSTI)
Sponsoring Org: SC USDOE - Office of Science (SC) Country of Publication: United States Language: English Subject: operating system noise; operating system interference; kernel ...
U-175: Linux Kernel KVM Memory Slot Management Flaw
A vulnerability was reported in the Linux Kernel. A local user on the guest operating system can cause denial of service conditions on the host operating system.
Transportation Representation | NISAC
Chemical Supply Chain Analysis (posted Mar 1, 2012): NISAC has...
U-242: Linux Kernel Netlink SCM_CREDENTIALS Processing Flaw Lets...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
U-242: Linux Kernel Netlink SCM_CREDENTIALS Processing Flaw Lets Local Users Gain Elevated Privileges...
V-156: Linux Kernel Array Bounds Checking Flaw Lets Local Users...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
V-156: Linux Kernel Array Bounds Checking Flaw Lets Local Users Gain Elevated Privileges. May...
On flame kernel formation and propagation in premixed gases
Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)
2010-12-15
Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters, including the discharge energy, radiation losses, initial temperature, and initial volume of the plasma, have been studied in detail. The experiments have been extended to flame kernel formation and propagation of methane/air mixtures. The effects of energy terms, including spark energy, chemical energy, and energy losses, on flame kernel formation and propagation have been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. It is concluded that these are the most important parameters affecting plasma kernel growth. The results of laminar burning speeds have been compared with previously published results and are in good agreement.
Prediction of spark kernel development in constant volume combustion
Lim, M.T.; Anderson, R.W.; Arpaci, V.S.
1987-09-01
Combustion initiation is studied in atmospheric pressure propane-air mixtures in a constant volume bomb with a high speed (10,000 fps) laser schlieren system. The spark current and voltage waveforms are simultaneously recorded for later model input. A phenomenological model for early flame kernel development is presented which accounts for the initial, breakdown generated, spark kernel and its subsequent growth. The kernel growth is initially controlled by the breakdown process and the subsequent electrical power input. A new, spark power induced, mass entrainment term is shown to model this initially rapid volume increase adequately while later growth is mainly dominated by diffusion. Results and model comparisons are presented for the effects of power input, spark energy, and equivalence ratio.
U-056: Linux Kernel HFS Buffer Overflow Lets Local Users Gain...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
U-056: Linux Kernel HFS Buffer Overflow Lets Local Users Gain Root Privileges. December 9, 2011 - 8:00am...
U-226: Linux Kernel SFC Driver TCP MSS Option Handling Denial...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
U-226: Linux Kernel SFC Driver TCP MSS Option Handling Denial of Service Vulnerability. August 2,...
T-583: Linux Kernel OSF Partition Table Buffer Overflow Lets Local Users Obtain Information
A local user can create a storage device with specially crafted OSF partition tables. When the kernel automatically evaluates the partition tables, a buffer overflow may occur and data from kernel heap space may leak to user-space.
U-210: Linux Kernel epoll_ctl() Bug Lets Local Users Deny Service
A vulnerability was reported in the Linux Kernel. A local user can cause denial of service conditions.
T-571: Linux Kernel dns_resolver Key Processing Error Lets Local Users Deny Services
A vulnerability was reported in the Linux Kernel. A local user can cause denial of service conditions.
PERI - Auto-tuning Memory Intensive Kernels for Multicore
Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
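The search-based strategy reduces to a simple harness: generate candidate kernels, time each on the target machine, keep the fastest. The sketch below is a minimal illustration with two hypothetical SpMV-like variants; the paper's actual contribution is the per-kernel, per-platform code generators that feed such a search:

```python
import time
import numpy as np

def timeit_once(fn, args):
    """Wall-clock time of a single call."""
    t0 = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t0

def autotune(kernel_variants, args, trials=3):
    """Time each variant (best of `trials` runs) and return the fastest one's name."""
    best, best_t = None, float("inf")
    for name, fn in kernel_variants.items():
        t = min(timeit_once(fn, args) for _ in range(trials))
        if t < best_t:
            best, best_t = name, t
    return best

# Two stand-in matrix-vector variants: a Python-level row loop vs. one BLAS call.
A = np.random.rand(300, 300)
x = np.random.rand(300)
variants = {
    "loop":  lambda A, x: [A[i] @ x for i in range(A.shape[0])],
    "numpy": lambda A, x: A @ x,
}
winner = autotune(variants, (A, x))
```

Real auto-tuners search a much larger space (register/cache blockings, prefetch distances, SIMD variants), but the select-by-measurement loop is the same.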
Results from ORNL Characterization of Nominal 350 µm NUCO Kernels from the BWXT 59344 batch
Hunn, John D; Kercher, Andrew K; Menchhofer, Paul A; Price, Jeffery R
2005-01-01
This document is a compilation of characterization data obtained on nominal 350 µm natural enrichment uranium oxide/uranium carbide kernels (NUCO) produced by BWXT for the Advanced Gas Reactor Fuel Development and Qualification Program. These kernels were produced as part of a development effort at BWXT to address issues involving forming and heat treatment and were shipped to ORNL for additional characterization and for coating tests. The kernels were identified as G73N-NU-59344; 250 grams were shipped to ORNL. Size, shape, and microstructural analyses were performed. These kernels were preceded by G73B-NU-69300 and G73B-NU-69301, which were kernels produced and delivered to ORNL earlier in the development phase. Characterization of the kernels from G73B-NU-69300 was summarized in ORNL/CF-04/07, 'Results from ORNL Characterization of Nominal 350 µm NUCO Kernels from the BWXT 69300 composite'.
Kernel-Correlated Levy Field Driven Forward Rate and Application to Derivative Pricing
Bo Lijun; Wang Yongjin; Yang Xuewei
2013-08-01
We propose a term structure of forward rates driven by a kernel-correlated Levy random field under the HJM framework. The kernel-correlated Levy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure.
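In the Gaussian special case (the jump part set aside), the setup the abstract describes takes a standard HJM form; the notation below is a hedged sketch under textbook conventions, not the paper's exact statement. The forward rate f(t,T) is driven by a random field W whose increments are correlated across maturities through a kernel c:

```latex
df(t,T) = \alpha(t,T)\,dt + \sigma(t,T)\,W(dt,T),
\qquad
\operatorname{Cov}\bigl(W(dt,T_1),\,W(dt,T_2)\bigr) = c(T_1,T_2)\,dt ,
```

and absence of arbitrage under the risk-neutral measure pins down the drift:

```latex
\alpha(t,T) = \sigma(t,T)\int_t^T c(T,u)\,\sigma(t,u)\,du .
```

The kernel-correlated Levy field of the paper adds a centered (compensated) Poisson random measure term to these dynamics, which modifies the drift restriction accordingly.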
T-653: Linux Kernel sigqueueinfo() Process Lets Local Users Send Spoofed Signals
A vulnerability was reported in the Linux Kernel. A local user can send spoofed signals to other processes in certain cases.
U-080: Linux Kernel XFS Heap Overflow May Let Remote Users Execute Arbitrary Code
A vulnerability was reported in the Linux Kernel. A remote user can cause arbitrary code to be executed on the target user's system.
TORCH Computational Reference Kernels - A Testbed for Computer Science Research
Kaiser, Alex; Williams, Samuel Webb; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David H.; Demmel, James W.; Strohmaier, Erich
2010-12-02
For decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models in order to improve application performance, efficiency, and productivity. Unfortunately, without overarching advice about future directions in these areas, individual guidance is inferred from the existing software/hardware ecosystem, and each discipline often conducts its research independently, assuming all other technologies remain fixed. In today's rapidly evolving world of on-chip parallelism, isolated and iterative improvements to performance may miss superior solutions in the same way gradient descent optimization techniques may get stuck in local minima. To combat this, we present TORCH: A Testbed for Optimization ResearCH. These computational reference kernels define the core problems of interest in scientific computing without mandating a specific language, algorithm, programming model, or implementation. To complement the kernel (problem) definitions, we provide a set of algorithmically-expressed verification tests that can be used to verify that a hardware/software co-designed solution produces an acceptable answer. Finally, to provide some illumination as to how researchers have implemented solutions to these problems in the past, we provide a set of reference implementations in C and MATLAB.
T-601: Windows Kernel win32k.sys Lets Local Users Gain Elevated Privileges
Multiple vulnerabilities were reported in the Windows Kernel. A local user can obtain elevated privileges on the target system. A local user can trigger a use-after free or null pointer dereference to execute arbitrary commands on the target system with kernel level privileges.
FABRICATION PROCESS AND PRODUCT QUALITY IMPROVEMENTS IN ADVANCED GAS REACTOR UCO KERNELS
Charles M Barnes
2008-09-01
A major element of the Advanced Gas Reactor (AGR) program is developing fuel fabrication processes to produce high quality uranium-containing kernels, TRISO-coated particles, and fuel compacts needed for planned irradiation tests. The goals of the AGR program also include developing the fabrication technology to mass produce this fuel at low cost. Kernels for the first AGR test ("AGR-1") consisted of uranium oxycarbide (UCO) microspheres that were produced by an internal gelation process followed by high temperature steps to convert the UO3 + C "green" microspheres first to UO2 + C and then to UO2 + UCx. The high temperature steps also densified the kernels. Babcock and Wilcox (B&W) fabricated UCO kernels for the AGR-1 irradiation experiment, which went into the Advanced Test Reactor (ATR) at Idaho National Laboratory in December 2006. An evaluation of the kernel process following AGR-1 kernel production led to several recommendations to improve the fabrication process. These recommendations included testing alternative methods of dispersing carbon during broth preparation, evaluating the method of broth mixing, optimizing the broth chemistry, optimizing sintering conditions, and demonstrating fabrication of the larger diameter UCO kernels needed for the second AGR irradiation test. Based on these recommendations and requirements, a test program was defined and performed. Certain portions of the test program were performed by Oak Ridge National Laboratory (ORNL), while tests at larger scale were performed by B&W. The tests at B&W have demonstrated improvements in both kernel properties and process operation. Changes in the form of carbon black used and the method of mixing the carbon prior to forming kernels led to improvements in the phase distribution in the sintered kernels, greater consistency in kernel properties, a reduction in forming run time, and simplifications to the forming process. Process parameter variation tests in both forming and sintering steps led
FABRICATION OF URANIUM OXYCARBIDE KERNELS AND COMPACTS FOR HTR FUEL
Dr. Jeffrey A. Phillips; Eric L. Shaber; Scott G. Nagley
2012-10-01
As part of the program to demonstrate tristructural isotropic (TRISO)-coated fuel for the Next Generation Nuclear Plant (NGNP), Advanced Gas Reactor (AGR) fuel is being irradiation tested in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL). This testing has led to improved kernel fabrication techniques, the formation of TRISO fuel particles, and upgrades to the overcoating, compaction, and heat treatment processes. Combined, these improvements provide a fuel manufacturing process that meets the stringent requirements associated with testing in the AGR experimentation program. Researchers at Idaho National Laboratory (INL) are working in conjunction with a team from Babcock and Wilcox (B&W) and Oak Ridge National Laboratory (ORNL) to (a) improve the quality of uranium oxycarbide (UCO) fuel kernels, (b) deposit TRISO layers to produce a fuel that meets or exceeds the standard developed by German researchers in the 1980s, and (c) develop a process to overcoat TRISO particles with the same matrix material, but applying it with water using equipment previously and successfully employed in the pharmaceutical industry. A primary goal of this work is to simplify the process, making it more robust and repeatable while relying less on operator technique than prior overcoating efforts. A secondary goal is to improve first-pass yields to greater than 95% through the use of established technology and equipment. In the first test, called "AGR-1," graphite compacts containing approximately 300,000 coated particles were irradiated from December 2006 to November 2009. The AGR-1 fuel was designed to closely replicate many of the properties of German TRISO-coated particles thought to be important for good fuel performance. No release of gaseous fission product, indicative of particle coating failure, was detected in the nearly 3-year irradiation to a peak burn up of 19.6% at a time-average temperature of 1038–1121°C. Before fabricating AGR-2 fuel, each
libMSR library and msr-safe kernel module
Energy Science and Technology Software Center (OSTI)
2013-09-26
Modern processors offer a wide range of control and measurement features. While these are traditionally accessed through libraries like PAPI, some newer features no longer follow the traditional model of counters that can be used to only read the state of the processor. For example, Precise Event Based Sampling (PEBS) can generate records that require kernel memory for storage. Additionally, new features like power capping and thermal control require similar new access methods. All of these features are ultimately controlled through Model Specific Registers (MSRs). We therefore need new mechanisms to make such features available to tools and ultimately to the user. libMSR provides a convenient interface to access MSRs and to allow tools to utilize their full functionality.
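At the lowest level, an MSR access is a short read at the register's offset in a per-CPU character device. The sketch below assumes the conventional /dev/cpu/&lt;n&gt;/msr_safe device exposed by the msr-safe module (the stock `msr` device works the same way as root); register 0xE8 (IA32_APERF) is just an example, and whether a read succeeds depends on the module's whitelist:

```python
import os
import struct

def read_msr(reg: int, path: str) -> int:
    """Read one 64-bit MSR: seek to the register address in the per-CPU
    character device and read 8 little-endian bytes.  Sketch only --
    device availability and permissions depend on the system."""
    fd = os.open(path, os.O_RDONLY)
    try:
        value = struct.unpack("<Q", os.pread(fd, 8, reg))[0]
    finally:
        os.close(fd)
    return value

# Typical use (requires msr-safe loaded and 0xE8 in its whitelist):
#   aperf = read_msr(0xE8, "/dev/cpu/0/msr_safe")
```

Libraries like libMSR wrap exactly this read/write-at-offset pattern and add batching and higher-level features (power caps, thermal controls) on top.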
AGR-5/6/7 LEUCO Kernel Fabrication Readiness Review
Marshall, Douglas W.; Bailey, Kirk W.
2015-02-01
In preparation for forming low-enriched uranium carbide/oxide (LEUCO) fuel kernels for the Advanced Gas Reactor (AGR) fuel development and qualification program, Idaho National Laboratory conducted an operational readiness review of the Babcock & Wilcox Nuclear Operations Group – Lynchburg (B&W NOG-L) procedures, processes, and equipment from January 14 – January 16, 2015. The readiness review focused on requirements taken from the American Society of Mechanical Engineers (ASME) Nuclear Quality Assurance Standard (NQA-1-2008, 1a-2009), a recent occurrence at the B&W NOG-L facility related to preparation of acid-deficient uranyl nitrate solution (ADUN), and a second look at concerns noted in a previous review. Topic areas open for the review were communicated to B&W NOG-L in advance of the on-site visit to facilitate the collection of objective evidence attesting to the state of readiness.
Dynamic extension of the Simulation Problem Analysis Kernel (SPANK)
Sowell, E.F. (Dept. of Computer Science); Buhl, W.F.
1988-07-15
The Simulation Problem Analysis Kernel (SPANK) is an object-oriented simulation environment for general simulation purposes. Among its unique features is use of the directed graph as the primary data structure, rather than the matrix. This allows straightforward use of graph algorithms for matching variables and equations, and reducing the problem graph for efficient numerical solution. The original prototype implementation demonstrated the principles for systems of algebraic equations, allowing simulation of steady-state, nonlinear systems (Sowell 1986). This paper describes how the same principles can be extended to include dynamic objects, allowing simulation of general dynamic systems. The theory is developed and an implementation is described. An example is taken from the field of building energy system simulation. 2 refs., 9 figs.
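The variable/equation matching step described above can be sketched as bipartite matching via augmenting paths. The tiny interface below (a dict from equation name to the set of variables it mentions) is an illustrative assumption, not SPANK's actual data structures:

```python
def match_equations(eq_vars):
    """Assign each equation a distinct variable to solve for, using
    augmenting paths on the bipartite equation/variable graph -- the kind
    of graph-theoretic matching SPANK uses in place of matrix ordering.

    eq_vars: dict mapping equation name -> set of variable names it contains.
    Returns a dict equation -> matched variable; raises if no perfect matching.
    """
    match = {}  # variable -> equation currently claiming it

    def augment(eq, seen):
        for v in eq_vars[eq]:
            if v in seen:
                continue
            seen.add(v)
            # Take v if free, or if its current owner can be re-routed.
            if v not in match or augment(match[v], seen):
                match[v] = eq
                return True
        return False

    for eq in eq_vars:
        if not augment(eq, set()):
            raise ValueError(f"no variable left to assign to {eq}")
    return {eq: v for v, eq in match.items()}

# Three equations over three variables: e3 only mentions x, which forces
# e1 -> y and e2 -> z regardless of discovery order.
assignment = match_equations({
    "e1": {"x", "y"},
    "e2": {"y", "z"},
    "e3": {"x"},
})
```

A matching that pairs every equation with a unique output variable is exactly what lets the problem graph be reduced and solved without assembling a matrix.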
SAR Image Complex Pixel Representations
Doerry, Armin W.
2015-03-01
Complex pixel values for Synthetic Aperture Radar (SAR) images of uniform distributed clutter can be represented as either real/imaginary (also known as I/Q) values, or as Magnitude/Phase values. Generally, these component values are integers with a limited number of bits. For clutter energy well below full-scale, Magnitude/Phase offers lower quantization noise than the I/Q representation. Further improvement can be had with companding of the Magnitude value.
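The claim is easy to check numerically. The sketch below quantizes the same weak complex clutter two ways with an 8-bit budget per component (full scale 1.0, magnitude on [0, 1), phase on [-π, π)); the parameters are illustrative assumptions, not the report's:

```python
import numpy as np

def quantize(a, step):
    """Uniform mid-tread quantizer with the given step size."""
    return np.round(a / step) * step

rng = np.random.default_rng(1)
# Weak distributed clutter: circular complex Gaussian ~40 dB below full scale.
n = 10000
z = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# I/Q: 8-bit real and imaginary components, each spanning [-1, 1).
z_iq = quantize(z.real, 2.0 / 256) + 1j * quantize(z.imag, 2.0 / 256)

# Magnitude/Phase: 8-bit magnitude on [0, 1) and 8-bit phase on [-pi, pi).
z_mp = quantize(np.abs(z), 1.0 / 256) * np.exp(1j * quantize(np.angle(z), 2 * np.pi / 256))

err_iq = np.mean(np.abs(z_iq - z) ** 2)  # quantization noise power, I/Q
err_mp = np.mean(np.abs(z_mp - z) ** 2)  # quantization noise power, Mag/Phase
```

Intuitively, for small signals the phase error contributes noise proportional to the (small) magnitude, so Magnitude/Phase spends its bits where they matter, while I/Q wastes dynamic range symmetrically about zero.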
Temporal Representation in Semantic Graphs
Levandoski, J J; Abdulla, G M
2007-08-07
A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning temporal evolution of concepts and relationships are not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.
Representation of Limited Rights Data and Restricted Computer...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Representation of Limited Rights Data and Restricted Computer Software Representation of Limited Rights Data and Restricted Computer Software Representation of Limited Rights Data ...
U-068:Linux Kernel SG_IO ioctl Bug Lets Local Users Gain Elevated...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Linux Kernel SG_IO ioctl Bug Lets Local Users Gain Elevated Privileges PLATFORM: Red Hat Enterprise Linux Desktop (v. 6) Red Hat Enterprise Linux HPC Node (v. 6) Red Hat...
STORM: A STatistical Object Representation Model
Rafanelli, M. ); Shoshani, A. )
1989-11-01
In this paper we explore the structure and semantic properties of the entities stored in statistical databases. We call such entities "statistical objects" (SOs) and propose a new "statistical object representation model" based on a graph representation. We identify a number of SO representational problems in current models and propose a methodology for their solution. 11 refs.
PART IV REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
K, Page i PART IV - REPRESENTATIONS AND INSTRUCTIONS SECTION K REPRESENTATIONS, CERTIFICATIONS, AND OTHER STATEMENTS OF OFFERORS K-1 FAR 52.204-8 ANNUAL REPRESENTATIONS AND CERTIFICATIONS (DEC 2014) ... 131 K-2 FAR 52.204-16 COMMERCIAL AND GOVERNMENT ENTITY CODE REPORTING (JUL 2015) ... 135 K-3 FAR 52.209-7 INFORMATION
Spark ignited turbulent flame kernel growth. Annual report, January--December, 1992
Santavicca, D.A.
1994-06-01
Cyclic combustion variations in spark-ignition engines limit the use of dilute charge strategies for achieving low NO{sub x} emissions and improved fuel economy. Results from an experimental study of the effect of incomplete fuel-air mixing (ifam) on spark-ignited flame kernel growth in turbulent propane-air mixtures are presented. The experiments were conducted in a turbulent flow system that allows for independent variation of flow parameters, ignition system parameters, and the degree of fuel-air mixing. Measurements were made at 1 atm and 300 K conditions. Five cases were studied: a premixed case and four incompletely mixed cases with 6%, 13%, 24%, and 33% RMS (root-mean-square) fluctuations in the fuel/air equivalence ratio. High speed laser shadowgraphy at 4,000 frames per second was used to record flame kernel growth following spark ignition, from which the equivalent flame kernel radius as a function of time was determined. The effect of ifam was evaluated in terms of the flame kernel growth rate, cyclic variations in the flame kernel growth, and the rate of misfire. The results show that fluctuations in local mixture strength due to ifam cause the flame kernel surface to become wrinkled and distorted, and that the amount of wrinkling increases with the degree of ifam. Ifam was also found to result in a significant increase in cyclic variations in the flame kernel growth. The average flame kernel growth rates for the premixed and the incompletely mixed cases were found to be within the experimental uncertainty, except for the 33%-RMS-fluctuation case, where the growth rate is significantly lower. The premixed and 6%-RMS-fluctuation cases had a 0% misfire rate. The misfire rates were 1% and 2% for the 13%-RMS-fluctuation and 24%-RMS-fluctuation cases, respectively; however, the misfire rate drastically increased to 23% in the 33%-RMS-fluctuation case.
The architecture of a plug-and-play kernel for oilfield software applications
Ward, V.L.; Seaton, C.P.
1996-12-01
It is now common practice for engineers to use PC software to design and evaluate oilfield services. Rapidly changing technology in PC software has made it necessary for organizations to release new applications quickly to remain competitive. The authors designed a plug-and-play kernel for the computer-aided design and evaluation (CADE) applications to reduce development time and time to market. The paper discusses the kernel used in the CADE software in detail.
Alternative Approach to Nuclear Data Representation
Pruet, J; Brown, D; Beck, B; McNabb, D P
2005-07-27
This paper considers an approach for representing nuclear data that is qualitatively different from the approach currently adopted by the nuclear science community. Specifically, they examine a representation in which complicated data is described through collections of distinct and self-contained simple data structures. This structure-based representation is compared with the ENDF and ENDL formats, which can be roughly characterized as dictionary-based representations. A pilot data representation for replacing the format currently used at LLNL is presented. Examples are given, as is a discussion of promises and shortcomings associated with moving from traditional dictionary-based formats to a structure-rich or class-like representation.
Code System to Calculate Correlation & Regression Coefficients.
Energy Science and Technology Software Center (OSTI)
1999-11-23
Version 00 PCC/SRC is designed for use in conjunction with sensitivity analyses of complex computer models. PCC/SRC calculates the partial correlation coefficients (PCC) and the standardized regression coefficients (SRC) from the multivariate input to, and output from, a computer model.
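As a sketch of what the SRC half of such a code computes, the snippet below standardizes an ordinary least-squares fit for two predictors. The data and the two-predictor restriction are invented for illustration; PCC/SRC itself handles the general multivariate case:

```python
import statistics

def src_two_predictors(x1, x2, y):
    """Standardized regression coefficients for y ~ b0 + b1*x1 + b2*x2.
    The normal equations on centered data are solved by Cramer's rule."""
    mx1, mx2, my = statistics.mean(x1), statistics.mean(x2), statistics.mean(y)
    s11 = sum((a - mx1) ** 2 for a in x1)
    s22 = sum((a - mx2) ** 2 for a in x2)
    s12 = sum((a - mx1) * (b - mx2) for a, b in zip(x1, x2))
    s1y = sum((a - mx1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - mx2) * (b - my) for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    b1 = (s1y * s22 - s2y * s12) / det
    b2 = (s2y * s11 - s1y * s12) / det
    # SRC_j = b_j * sd(x_j) / sd(y): effect per standard deviation of input.
    sy = statistics.stdev(y)
    return b1 * statistics.stdev(x1) / sy, b2 * statistics.stdev(x2) / sy

# Made-up model output: y depends strongly on x1 and weakly on x2.
x1 = [0, 1, 2, 3, 4, 5, 6, 7]
x2 = [1, 0, 2, 1, 3, 2, 4, 3]
y = [3.0 * a + 0.5 * b for a, b in zip(x1, x2)]
src1, src2 = src_two_predictors(x1, x2, y)
```

Because the toy response is an exact linear combination, the recovered raw coefficients are exactly 3 and 0.5, and the SRCs simply rescale them by the input/output standard deviations.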
SEP Request for Approval Form 3 - Other Complex Regression Model...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
3 - Other Complex Regression Model Rationale SEP Request for Approval Form 3 - Other Complex Regression Model Rationale SEP-Request-for-Approval-Form-3Other-Complex-Regression-Mod...
STELLAR LOCUS REGRESSION: ACCURATE COLOR CALIBRATION AND THE...
Office of Scientific and Technical Information (OSTI)
REGRESSION: ACCURATE COLOR CALIBRATION AND THE REAL-TIME DETERMINATION OF GALAXY CLUSTER PHOTOMETRIC REDSHIFTS Citation Details In-Document Search Title: STELLAR LOCUS REGRESSION: ...
Luttman, A.
2012-10-08
This slide-show discusses the use of the Local Polynomial Approximation (LPA) to smooth signals from photonic Doppler velocimetry (PDV) applying a generalized Peano kernel theorem.
Part II - Managerial Competencies: Organizational Representation...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Part II - Managerial Competencies: Organizational Representation and Liaison Form for the SES program emphasizes the range of communications and public relations aspects of ...
Lindemer, Terrence; Voit, Stewart L; Silva, Chinthaka M; Besmann, Theodore M; Hunt, Rodney Dale
2014-01-01
The U.S. Department of Energy is considering a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with large, dense uranium nitride (UN) kernels. This effort explores many factors involved in using gel-derived uranium oxide-carbon microspheres to make large UN kernels. Analysis of recent studies with sufficient experimental details is provided. Extensive thermodynamic calculations are used to predict carbon monoxide and other pressures for several different reactions that may be involved in conversion of uranium oxides and carbides to UN. Experimentally, the method for making the gel-derived microspheres is described. These were used in a microbalance with an attached mass spectrometer to determine details of carbothermic conversion in argon, nitrogen, or vacuum. A quantitative model is derived from experiments for vacuum conversion to a uranium oxide-carbide kernel.
Problematic projection to the in-sample subspace for a kernelized anomaly detector
Theiler, James; Grosklos, Guen
2016-03-07
We examine the properties and performance of kernelized anomaly detectors, with an emphasis on the Mahalanobis-distance-based kernel RX (KRX) algorithm. Although the detector generally performs well for high-bandwidth Gaussian kernels, it exhibits problematic (in some cases, catastrophic) performance for distances that are large compared to the bandwidth. By comparing KRX to two other anomaly detectors, we can trace the problem to a projection in feature space, which arises when a pseudoinverse is used on the covariance matrix in that feature space. Here, we show that a regularized variant of KRX overcomes this difficulty and achieves superior performance over a wide range of bandwidths.
Representable states on quasilocal quasi *-algebras
Bagarello, F.; Trapani, C.; Triolo, S.
2011-01-15
Continuing a previous analysis originally motivated by physics, we consider representable states on quasilocal quasi *-algebras, starting by examining the possibility for a compatible family of local states to give rise to a global state. Some properties of local modifications of representable states and some aspects of their asymptotic behavior are also considered.
Predictive based monitoring of nuclear plant component degradation using support vector regression
Agarwal, Vivek; Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.
2015-02-01
Nuclear power plants (NPPs) are large installations comprised of many active and passive assets. Degradation monitoring of all these assets is an expensive (labor cost) and highly demanding task. In this paper a framework based on Support Vector Regression (SVR) for online surveillance of critical parameter degradation of NPP components is proposed. In this case, on-time replacement or maintenance of components will prevent potential plant malfunctions and reduce the overall operational cost. In the current work, we apply SVR equipped with a Gaussian kernel function to monitor components. Monitoring includes the one-step-ahead prediction of the component's respective operational quantity using the SVR model, while the SVR model is trained using a set of previously recorded degradation histories of similar components. Predictive capability of the model is evaluated upon arrival of a sensor measurement, which is compared to the component failure threshold. A maintenance decision is based on a fuzzy inference system that utilizes three parameters: (i) the prediction evaluation in the previous steps, (ii) the predicted value of the current step, and (iii) the difference between the current predicted value and the component's failure threshold. The proposed framework will be tested on turbine blade degradation data.
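A heavily simplified sketch of the kernel prediction step is given below. It substitutes Gaussian-kernel ridge regression for a trained SVR (same Gaussian kernel, but a plain regularized least-squares fit, which is much shorter to write), and the degradation history, query time, and failure threshold are all invented for illustration:

```python
import math

def solve(a, b):
    """Solve a*x = b by Gaussian elimination with partial pivoting
    (both arguments are modified in place)."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

def rbf(u, v, gamma=0.5):
    """Gaussian kernel, as in the SVR model described above."""
    return math.exp(-gamma * (u - v) ** 2)

def fit_predict(times, values, t_query, ridge=1e-8):
    """Fit a kernel ridge model to a degradation history, then predict."""
    n = len(times)
    K = [[rbf(times[i], times[j]) + (ridge if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, list(values))
    return sum(a * rbf(t, t_query) for a, t in zip(alpha, times))

# Hypothetical degradation history: a quantity growing quadratically in time.
history_t = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
history_y = [0.01 * t * t for t in history_t]
predicted = fit_predict(history_t, history_y, t_query=5.5)
failure_threshold = 0.5          # illustrative threshold
needs_maintenance = predicted >= failure_threshold
```

The comparison of `predicted` against `failure_threshold` stands in for the final maintenance decision, which in the paper is made by a fuzzy inference system rather than a single threshold test.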
Mental Representations Formed From Educational Website Formats
Elizabeth T. Cady; Kimberly R. Raddatz; Tuan Q. Tran; Bernardo de la Garza; Peter D. Elgin
2006-10-01
The increasing popularity of web-based distance education places high demand on distance educators to format web pages in ways that facilitate learning. However, limited guidelines exist regarding appropriate writing styles for web-based distance education. This study investigated the effect of four different writing styles on readers' mental representations of hypertext. Participants studied hypertext written in one of four web-writing styles (concise, scannable, objective, or combined) and were then administered a cued association task intended to measure their mental representations of the hypertext. It is hypothesized that the scannable and combined styles will bias readers to scan rather than read elaborately, which may result in less dense mental representations (as identified through Pathfinder analysis) relative to the objective and concise writing styles. Further, the use of more descriptors in the objective writing style will lead to better integration of ideas and denser mental representations than the concise writing style.
PT-symmetric representations of fermionic algebras
Bender, Carl M.; Klevansky, S. P.
2011-08-15
A recent paper by Jones-Smith and Mathur, Phys. Rev. A 82, 042101 (2010) extends PT-symmetric quantum mechanics from bosonic systems (systems for which T{sup 2}=1) to fermionic systems (systems for which T{sup 2}=-1). The current paper shows how the formalism developed by Jones-Smith and Mathur can be used to construct PT-symmetric matrix representations for operator algebras of the form {eta}{sup 2}=0, {eta bar}{sup 2}=0, {eta}{eta bar}+{eta bar}{eta}={alpha}1, where {eta bar}={eta}{sup PT}=PT{eta}T{sup -1}P{sup -1}. It is easy to construct matrix representations for the Grassmann algebra ({alpha}=0). However, one can only construct matrix representations for the fermionic operator algebra ({alpha}{ne}0) if {alpha}=-1; a matrix representation does not exist for the conventional value {alpha}=1.
Part II - Managerial Competencies: Organizational Representation and
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Liaison | Department of Energy Part II - Managerial Competencies: Organizational Representation and Liaison Part II - Managerial Competencies: Organizational Representation and Liaison Form for the SES program emphasizes the range of communications and public relations aspects of executive positions as found in official correspondence and documentation, as well as, formal and informal verbal communications, and it describes the major competencies within this activity Part II - Managerial
Reiner, Dora; Blaickner, Matthias; Rattay, Frank
2009-11-15
Purpose: Radiopharmaceuticals administered in targeted radionuclide therapy (TRT) rely to a great extent not only on beta-emitting nuclides but also on emitters of monoenergetic electrons. Recent advances like combined PET/CT devices, the consequential coregistration of both data, the concept of using beta couples for diagnosis and therapy, respectively, as well as the development of voxel models offer a great potential for developing TRT dose calculation systems similar to those available for external beam treatment planning. The deterministic algorithms in question for this task are based on the convolution of three-dimensional matrices, one representing the activity distribution and the other the dose point kernel. This study aims to report on three-dimensional kernel matrices for various nuclides used in TRT. Methods: The Monte Carlo code MCNP5 was used to calculate discrete dose kernels of beta particles including the contributions from their respective secondary radiation in soft tissue for the following nuclides: {sup 32}P, {sup 33}P, {sup 67}Cu, {sup 89}Sr, {sup 90}Y, {sup 103}Rh{sup m}, {sup 131}I, {sup 177}Lu, {sup 186}Re, and {sup 188}Re. For each nuclide a kernel cube of 10x10x10 mm{sup 3} was calculated, the dimensions of a voxel being 1 mm{sup 3}. Additional kernels with voxel sizes of 3x3x3 mm{sup 3} were simulated. Results: Comparison with the S-value data regarding {sup 32}P, {sup 89}Sr, {sup 90}Y, and {sup 131}I of the MIRD committee which were calculated with the EGS4 code showed a very good agreement, the secondary particle transport of {sup 90}Y being the only exception. Documented analytical kernels on the other side show deviations very close and very far to the source. Conclusions: The good accordance with the only discrete dose kernels published up to date justifies the method chosen. Together with the additional six nuclides, this report provides a considerable database for three-dimensional kernel matrices with regard to beta
Verification and large deformation analysis using the reproducing kernel particle method
Beckwith, Frank
2015-09-01
The reproducing kernel particle method (RKPM) is a meshless method used to solve general boundary value problems using the principle of virtual work. RKPM corrects the kernel approximation by introducing reproducing conditions which force the method to be complete to arbitrary-order polynomials selected by the user. Effort in recent years has led to the implementation of RKPM within the Sierra/SM physics software framework. The purpose of this report is to investigate convergence of RKPM for verification and validation purposes as well as to demonstrate the large deformation capability of RKPM in problems where the finite element method is known to experience difficulty. Results from analyses using RKPM are compared against finite element analysis. A host of issues associated with RKPM are identified and a number of potential improvements are discussed for future work.
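In one dimension the reproducing correction can be sketched as follows. The node layout, window function, and support size are arbitrary choices here, and only a linear basis is corrected, i.e., the shape functions are forced to reproduce 1 and x exactly:

```python
def rkpm_shape(x, nodes, support=1.5):
    """1-D RKPM shape functions with a linear basis: a smooth compactly
    supported window is corrected so the approximation reproduces 1 and x."""
    def window(r):
        r = abs(r)
        return (1.0 - r) ** 2 * (1.0 + 2.0 * r) if r < 1.0 else 0.0
    w = [window((x - xi) / support) for xi in nodes]
    # Moments of the window against the shifted linear basis (1, x - xi).
    m0 = sum(w)
    m1 = sum(wi * (x - xi) for wi, xi in zip(w, nodes))
    m2 = sum(wi * (x - xi) ** 2 for wi, xi in zip(w, nodes))
    det = m0 * m2 - m1 * m1
    c0, c1 = m2 / det, -m1 / det   # correction from the reproducing conditions
    return [(c0 + c1 * (x - xi)) * wi for wi, xi in zip(w, nodes)]

nodes = [float(i) for i in range(11)]   # uniform nodes at 0..10
psi = rkpm_shape(3.3, nodes)
partition_of_unity = sum(psi)                                   # should be 1
linear_completeness = sum(p * xi for p, xi in zip(psi, nodes))  # should be 3.3
```

The two sums at the end are exactly the zeroth- and first-order reproducing conditions the abstract refers to; higher-order completeness would enlarge the moment matrix accordingly.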
Azcona, J; Burguete, J
2014-06-01
Purpose: To obtain the pencil beam kernels that characterize a megavoltage photon beam generated in an FFF linac by experimental measurements, and to apply them for dose calculation in modulated fields. Methods: Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from a Varian True Beam (Varian Medical Systems, Palo Alto, CA) linac, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The measured dose leads to the kernel characterization, assuming that the energy fluence exiting the linac head, and further collimated, originates from a point source. The three-dimensional kernel was obtained by deconvolution at each depth using the Hankel transform. A correction on the low dose part of the kernel was performed to reproduce accurately the experimental output factors. The kernels were used to calculate modulated dose distributions in six modulated fields and compared through the gamma index to their absolute dose measured by film in the RW3 phantom. Results: The resulting kernels properly characterize the global beam penumbra. The output factor-based correction was carried out adding the amount of signal necessary to reproduce the experimental output factor in steps of 2 mm, starting at a radius of 4 mm. There the kernel signal was in all cases below 10% of its maximum value. With this correction, the number of points that pass the gamma index criteria (3%, 3 mm) in the modulated fields is, for all cases, at least 99.6% of the total number of points. Conclusion: A system for independent dose calculations in modulated fields from FFF beams has been developed. Pencil beam kernels were obtained and their ability to accurately calculate dose in homogeneous media was demonstrated.
Data summary for nominal 500 {micro}m DUO{sub 2} kernels
Hunn, John D
2004-04-01
This document is a compilation of characterization data obtained on the nominal 500 {micro}m DUO{sub 2} kernels produced by ORNL for the Advanced Gas Reactor Fuel Development and Qualification Program to satisfy the FY03 WBS 3.1.2 task milestone No. 2. 2 kg of kernels were produced and combined in two composite lots. DUN-500 was a 1630 g composite sieved between 500 {+-} 2 {micro}m and 534 {+-} 2 {micro}m ASTM E161 electroformed sieves. DUN-482 was a 385.6 g composite sieved between 482 {+-} 2 {micro}m and 518 {+-} 2 {micro}m ASTM E161 electroformed sieves. Size, shape, density, and microstructural analysis were performed on a 100 g sublot (DUN-500-S-1) riffled from the DUN-500 composite. Size and shape were also measured on a 100 g sublot (DUN-482-S-1) riffled from the DUN-482 composite. For comparison, analysis was also performed on kernels extracted from the German reference fuel EUO 2358-2365 (AGR-06).
Petersen, Jakob; Pollak, Eli
2015-12-14
One of the challenges facing on-the-fly ab initio semiclassical time evolution is the large expense needed to converge the computation. In this paper, we suggest that a significant saving in computational effort may be achieved by employing a semiclassical initial value representation (SCIVR) of the quantum propagator based on the Heisenberg interaction representation. We formulate and test numerically a modification and simplification of the previous semiclassical interaction representation of Shao and Makri [J. Chem. Phys. 113, 3681 (2000)]. The formulation is based on the wavefunction form of the semiclassical propagation instead of the operator form, and so is simpler and cheaper to implement. The semiclassical interaction representation has the advantage that the phase and prefactor vary relatively slowly as compared to the “standard” SCIVR methods. This improves its convergence properties significantly. Using a one-dimensional model system, the approximation is compared with Herman-Kluk’s frozen Gaussian and Heller’s thawed Gaussian approximations. The convergence properties of the interaction representation approach are shown to be favorable and indicate that the interaction representation is a viable way of incorporating on-the-fly force field information within a semiclassical framework.
TURBULENCE-INDUCED RELATIVE VELOCITY OF DUST PARTICLES. IV. THE COLLISION KERNEL
Pan, Liubin; Padoan, Paolo
2014-12-20
Motivated by its importance for modeling dust particle growth in protoplanetary disks, we study turbulence-induced collision statistics of inertial particles as a function of the particle friction time, {tau}{sub p}. We show that turbulent clustering significantly enhances the collision rate for particles of similar sizes with {tau}{sub p} corresponding to the inertial range of the flow. If the friction time, {tau}{sub p,h}, of the larger particle is in the inertial range, the collision kernel per unit cross section increases with increasing friction time, {tau}{sub p,l}, of the smaller particle and reaches the maximum at {tau}{sub p,l} = {tau}{sub p,h}, where the clustering effect peaks. This feature is not captured by the commonly used kernel formula, which neglects the effect of clustering. We argue that turbulent clustering helps alleviate the bouncing barrier problem for planetesimal formation. We also investigate the collision velocity statistics using a collision-rate weighting factor to account for higher collision frequency for particle pairs with larger relative velocity. For {tau}{sub p,h} in the inertial range, the rms relative velocity with collision-rate weighting is found to be invariant with {tau}{sub p,l} and scales roughly as {tau}{sub p,h}{sup 1/2}. The weighting factor favors collisions with larger relative velocity, and including it leads to more destructive and less sticking collisions. We compare two collision kernel formulations based on spherical and cylindrical geometries. The two formulations give consistent results for the collision rate and the collision-rate weighted statistics, except that the spherical formulation predicts more head-on collisions than the cylindrical formulation.
Computing traveltime and amplitude sensitivity kernels in finite-frequency tomography
Tian, Yue; Montelli, Raffaella; Nolet, Guust; Dahlen, F. A.
2007-10-01
The efficient computation of finite-frequency traveltime and amplitude sensitivity kernels for velocity and attenuation perturbations in global seismic tomography poses problems both of numerical precision and of validity of the paraxial approximation used. We investigate these aspects, using a local model parameterization in the form of a tetrahedral grid with linear interpolation in between grid nodes. The matrix coefficients of the linear inverse problem involve a volume integral of the product of the finite-frequency kernel with the basis functions that represent the linear interpolation. We use local and global tests as well as analytical expressions to test the numerical precision of the frequency and spatial quadrature. There is a trade-off between narrowing the bandpass filter and quadrature accuracy and efficiency. Using a minimum step size of 10 km for S waves and 30 km for SS waves, relative errors in the quadrature are of the order of 1% for direct waves such as S, and a few percent for SS waves, which are below data uncertainties in delay time or amplitude anomaly observations in global seismology. Larger errors may occur wherever the sensitivity extends over a large volume and the paraxial approximation breaks down at large distance from the ray. This is especially noticeable for minimax phases such as SS waves with periods >20 s, when kernels become hyperbolic near the reflection point and appreciable sensitivity extends over thousands of km. Errors become intolerable at epicentral distances near the antipode, when sensitivity extends over all azimuths in the mantle. Effects of such errors may become noticeable at epicentral distances > 140{sup o}. We conclude that the paraxial approximation offers an efficient method for computing the matrix system for finite-frequency inversions in global tomography, though care should be taken near reflection points, and alternative methods are needed to compute sensitivity near the antipode.
Representations of some quantum tori Lie subalgebras
Jiang, Jingjing; Wang, Song
2013-03-15
In this paper, we define the q-analog Virasoro-like Lie subalgebras in x{sub {infinity}}=a{sub {infinity}}, b{sub {infinity}}, c{sub {infinity}}, d{sub {infinity}}. The embedding formulas into x{sub {infinity}} are introduced. Irreducible highest weight representations of the A(tilde sign){sub q}, B(tilde sign){sub q}, and C(tilde sign){sub q}-series of the q-analog Virasoro-like Lie algebras in terms of vertex operators are constructed. We also construct the polynomial representations of the A(tilde sign){sub q}, B(tilde sign){sub q}, C(tilde sign){sub q}, and D(tilde sign){sub q}-series of the q-analog Virasoro-like Lie algebras.
Group representations, error bases and quantum codes
Knill, E
1996-01-01
This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.
Robust regression on noisy data for fusion scaling laws
Verdoolaege, Geert
2014-11-15
We introduce the method of geodesic least squares (GLS) regression for estimating fusion scaling laws. Based on straightforward principles, the method is easily implemented, yet it clearly outperforms established regression techniques, particularly in cases of significant uncertainty on both the response and predictor variables. We apply GLS for estimating the scaling of the L-H power threshold, resulting in estimates for ITER that are somewhat higher than predicted earlier.
Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
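The contrast with a histogram tally can be sketched in one dimension: with a KDE, every event contributes to the score at any tally point. Everything below (event sites, Gaussian kernel, bandwidth) is invented for illustration and is not the paper's MFP formulation:

```python
import math

def gaussian_kde(events, x, bandwidth):
    """Kernel density estimate at tally point x: every event contributes,
    whereas a histogram event scores in exactly one bin."""
    norm = 1.0 / (len(events) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - e) / bandwidth) ** 2)
                      for e in events)

# Hypothetical collision sites spread uniformly over [0, 1).
events = [(i + 0.5) / 400 for i in range(400)]
density_mid = gaussian_kde(events, 0.5, bandwidth=0.05)   # expect ~1.0
```

Because the sites are uniform on the unit interval, the estimated density at the midpoint should be close to 1 for any bandwidth small compared to the interval but large compared to the event spacing.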
Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation
Huang, Hao; Yoo, Shinjae; Yu, Dantong; Qin, Hong
2015-06-01
Current spectral clustering algorithms suffer from sensitivity to noise and to parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the consequent clustering results cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping of the original dataset, but it also demonstrates stability while tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
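A toy sketch of the heat-kernel aggregation idea is given below. The graph, the diffusion times, and the plain summation are invented for illustration; the paper's AHK and LDAT constructions are more elaborate than this:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(M, terms=50):
    """Matrix exponential via truncated Taylor series (fine for tiny matrices)."""
    n = len(M)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = [[v / k for v in row] for row in mat_mul(power, M)]  # M^k / k!
        result = [[result[i][j] + power[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Two dense clusters {0,1,2} and {3,4,5} joined by one weak bridge (2,3).
n = 6
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = [[0.0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1.0
L = [[(sum(A[i]) if i == j else 0.0) - A[i][j] for j in range(n)]
     for i in range(n)]

# Aggregate the heat kernel exp(-t*L) over several diffusion times.
ahk = [[0.0] * n for _ in range(n)]
for t in (0.5, 1.0, 2.0):
    H = mat_exp([[-t * v for v in row] for row in L])
    ahk = [[ahk[i][j] + H[i][j] for j in range(n)] for i in range(n)]
```

Aggregated affinity between nodes inside the same cluster comes out larger than across the bridge, which is the robustness property the abstract appeals to.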
Liu, Derek; Sloboda, Ron S.
2014-05-15
Purpose: Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. Methods: The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Results: Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. Conclusions: A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
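The fractional-shift step can be sketched in one dimension (a hypothetical smooth profile stands in for the seed dose kernel; in the method above, the integer part of the shift is handled in the Fourier domain and only the remainder goes through the Lagrange filter):

```python
import math

def lagrange_shift(samples, frac):
    """Resample a uniformly spaced signal at index i + frac (0 <= frac < 1)
    using the piecewise third-order (4-point) Lagrange filter."""
    f = frac
    w = (-f * (f - 1.0) * (f - 2.0) / 6.0,          # weight for node i-1
         (f + 1.0) * (f - 1.0) * (f - 2.0) / 2.0,   # weight for node i
         -(f + 1.0) * f * (f - 2.0) / 2.0,          # weight for node i+1
         (f + 1.0) * f * (f - 1.0) / 6.0)           # weight for node i+2
    return [sum(wk * samples[i - 1 + k] for k, wk in enumerate(w))
            for i in range(1, len(samples) - 2)]

# Hypothetical smooth profile sampled on a unit grid at x = -6..6.
profile = [math.exp(-(i - 6) ** 2 / 8.0) for i in range(13)]
shifted = lagrange_shift(profile, 0.3)
# shifted[j] approximates the profile at x = (j + 1) - 6 + 0.3
```

For a signal this smooth relative to the grid, the cubic Lagrange resampling error is orders of magnitude below the percent-level dose accuracy discussed in the abstract.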
Shell Element Verification & Regression Problems for DYNA3D
Zywicz, E
2008-02-01
A series of quasi-static regression/verification problems were developed for the triangular and quadrilateral shell element formulations contained in Lawrence Livermore National Laboratory's explicit finite element program DYNA3D. Each regression problem imposes both displacement- and force-type boundary conditions to probe the five independent nodal degrees of freedom employed in the targeted formulation. When applicable, the finite element results are compared with small-strain linear-elastic closed-form reference solutions to verify select aspects of the formulation's implementation. Although all problems in the suite depict the same geometry, material behavior, and loading conditions, each problem represents a unique combination of shell formulation, stabilization method, and integration rule. Collectively, the thirty-six new regression problems in the test suite cover nine different shell formulations, three hourglass stabilization methods, and three families of through-thickness integration rules.
T-567: Linux Kernel Buffer Overflow in ldm_frag_add() May Let Local Users Gain Elevated Privileges
A vulnerability was reported in the Linux Kernel. A local user may be able to obtain elevated privileges on the target system. A physically local user can connect a storage device with a specially crafted LDM partition table to trigger a buffer overflow in the ldm_frag_add() function in 'fs/partitions/ldm.c' and potentially execute arbitrary code with elevated privileges.
Geometric representation of fundamental particles' inertial mass
Schachter, L.; Spencer, James
2015-07-22
A geometric representation of the (N = 279) masses of quarks, leptons, hadrons and gauge bosons was introduced by employing a Riemann Sphere, facilitating the interpretation of the N masses in terms of a single particle, the Masson, which might be in one of the N eigen-states. Geometrically, its mass is the radius of the Riemann Sphere. Dynamically, its derived mass is near the mass of the nucleon regardless of whether it is determined from all N particles or only the hadrons, the mesons or the baryons separately. Ignoring all the other properties of these particles, it is shown that the eigen-values, the polar representation θ_{ν} of the masses on the Sphere, satisfy the symmetry θ_{ν} + θ_{N+1-ν} = π within less than 1% relative error. In addition, these pair correlations include the pairs θ_{γ} + θ_{top} ≃ π and θ_{gluon} + θ_{H} ≃ π as well as pairing the weak gauge bosons with the three neutrinos.
A kernel-oriented model for coalition-formation in general environments: Implementation and results
Shehory, O.; Kraus, S.
1996-12-31
In this paper we present a model for coalition formation and payoff distribution in general environments. We focus on a reduced complexity kernel-oriented coalition formation model, and provide a detailed algorithm for the activity of the single rational agent. The model is partitioned into a social level and a strategic level, to distinguish between regulations that must be agreed upon and are forced by agent-designers, and strategies by which each agent acts at will. In addition, we present an implementation of the model and simulation results. From these we conclude that implementing the model for coalition formation among agents increases the benefits of the agents with reasonable time consumption. It also shows that more coalition formations yield more benefits to the agents.
Three-dimensional photodissociation in strong laser fields: Memory-kernel effective-mode expansion
Li Xuan; Thanopulos, Ioannis; Shapiro, Moshe
2011-03-15
We introduce a method for the efficient computation of non-Markovian quantum dynamics for strong (and time-dependent) system-bath interactions. The past history of the system dynamics is incorporated by expanding the memory kernel in exponential functions, thereby transforming in an exact fashion the non-Markovian integrodifferential equations into a (larger) set of ''effective modes'' differential equations (EMDE). We have devised a method which easily diagonalizes the EMDE, thereby allowing for the efficient construction of an adiabatic basis and the fast propagation of the EMDE in time. We have applied this method to three-dimensional photodissociation of the H{sub 2}{sup +} molecule by strong laser fields. Our calculations properly include resonance-Raman scattering via the continuum, resulting in extensive rotational and vibrational excitations. The calculated final kinetic and angular distributions of the photofragments are in overall excellent agreement with experiments, both when transform-limited pulses and when chirped pulses are used.
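The effective-mode transformation can be illustrated on a scalar toy problem: with a memory kernel expanded as K(t) = Σ_j c_j e^(-g_j t), each exponential term becomes one auxiliary ODE variable, turning the integro-differential equation x' = -∫ K(t-s) x(s) ds into a small set of ordinary differential equations. This is a toy illustration of the transformation under our own assumptions, not the paper's EMDE code.

```python
import numpy as np

def solve_effective_modes(c, g, x0, dt, steps):
    """Evolve x' = -∫ K(t-s) x(s) ds with K(t) = sum_j c_j exp(-g_j t),
    rewritten exactly as ODEs via one effective mode z_j per exponential:
        z_j' = -g_j z_j + c_j x,   x' = -sum_j z_j."""
    x, z = x0, np.zeros_like(c)
    for _ in range(steps):  # explicit Euler on the enlarged ODE set
        dx = -z.sum()
        z += dt * (-g * z + c * x)
        x += dt * dx
    return x

def solve_direct(c, g, x0, dt, steps):
    """Reference: keep the whole history and evaluate the memory
    integral by the trapezoidal rule at every step (O(n^2) cost)."""
    hist = np.empty(steps + 1)
    hist[0] = x0
    t = np.arange(steps + 1) * dt
    for k in range(steps):
        kern = (c[:, None] * np.exp(-g[:, None] * (t[k] - t[:k + 1]))).sum(axis=0)
        vals = kern * hist[:k + 1]
        integral = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
        hist[k + 1] = hist[k] - dt * integral
    return hist[steps]
```

The effective-mode solver needs no stored history, which is the efficiency gain the abstract describes; both routes agree up to discretization error.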
DYNA3D/ParaDyn Regression Test Suite Inventory
Lin, J I
2011-01-25
The following table constitutes an initial assessment of feature coverage across the regression test suite used for DYNA3D and ParaDyn. It documents the regression test suite at the time of production release 10.1 in September 2010. The columns of the table represent groupings of functionalities, e.g., material models. Each problem in the test suite is represented by a row in the table. All features exercised by the problem are denoted by a check mark in the corresponding column. The definition of ''feature'' has not been subdivided to its smallest unit of user input, e.g., algorithmic parameters specific to a particular type of contact surface. This represents a judgment to provide code developers and users a reasonable impression of feature coverage without expanding the width of the table by several multiples. All regression testing is run in parallel, typically with eight processors. Many are strictly regression tests acting as a check that the codes continue to produce adequately repeatable results as development unfolds, compilers change and platforms are replaced. A subset of the tests represents true verification problems that have been checked against analytical or other benchmark solutions. Users are welcome to submit documented problems for inclusion in the test suite, especially if they heavily exercise, and depend upon, features that are currently underrepresented.
A representation formula for maps on supermanifolds
Helein, Frederic [Institut de Mathematiques de Jussieu, UMR 7586, Universite Denis Diderot-Paris 7, Case 7012, 2 place Jussieu, 75251 Paris Cedex 5 (France)
2008-02-15
We analyze the notion of morphisms of rings of superfunctions which is the basic concept underlying the definition of supermanifolds as ringed spaces (i.e., following Berezin, Leites, Manin, etc.). We establish a representation formula for all (pull-back) morphisms from the algebra of functions on an ordinary manifold to the superalgebra of functions on an open subset of a superspace. We then derive two consequences of this result. The first one is that we can integrate the data associated with a morphism in order to get a (nonunique) map defined on an ordinary space (and uniqueness can be achieved by restriction to a scheme). The second one is a simple and intuitive recipe to compute pull-back images of a function on a manifold M by a map from a superspace to M.
Collins, J.L.
2004-12-02
The main objective of the Depleted UO{sub 2} Kernels Production Task at Oak Ridge National Laboratory (ORNL) was to conduct two small-scale production campaigns to produce 2 kg of UO{sub 2} kernels with diameters of 500 {+-} 20 {micro}m and 3.5 kg of UO{sub 2} kernels with diameters of 350 {+-} 10 {micro}m for the U.S. Department of Energy Advanced Fuel Cycle Initiative Program. The final acceptance requirements for the UO{sub 2} kernels are provided in the first section of this report. The kernels were prepared for use by the ORNL Metals and Ceramics Division in a development study to perfect the triisotropic (TRISO) coating process. It was important that the kernels be strong and near theoretical density, with excellent sphericity, minimal surface roughness, and no cracking. This report gives a detailed description of the production efforts and results as well as an in-depth description of the internal gelation process and its chemistry. It describes the laboratory-scale gel-forming apparatus, optimum broth formulation and operating conditions, preparation of the acid-deficient uranyl nitrate stock solution, the system used to provide uniform broth droplet formation and control, and the process of calcining and sintering UO{sub 3} {center_dot} 2H{sub 2}O microspheres to form dense UO{sub 2} kernels. The report also describes improvements and best past practices for uranium kernel formation via the internal gelation process, which utilizes hexamethylenetetramine and urea. Improvements were made in broth formulation and broth droplet formation and control that made it possible in many of the runs in the campaign to produce the desired 350 {+-} 10-{micro}m-diameter kernels, and to obtain very high yields.
Latest Jurassic-early Cretaceous regressive facies, northeast Africa craton
van Houten, F.B.
1980-06-01
Nonmarine to paralic detrital deposits accumulated in six large basins between Algeria and the Arabo-Nubian shield during major regression in latest Jurassic and Early Cretaceous time. The Ghadames, Sirte (north-central Libya), and Northern (Egypt) basins lay along the cratonic margin of northeastern Africa. The Murzuk, Kufra, and Southern (Egypt) basins lay in the south within the craton. Data for reconstructing distribution, facies, and thickness of relevant sequences are adequate for the three northern basins only. High detrital influx near the end of Jurassic time and in mid-Cretaceous time produced regressive nubian facies composed largely of low-sinuosity stream and fan-delta deposits. In the west and southwest the Ghadames, Murzuk, and Kufra basins were filled with a few hundred meters of detritus after long-continued earlier Mesozoic aggradation. In northern Egypt the regressive sequence succeeded earlier Mesozoic marine sedimentation; in the Sirte and Southern basins correlative deposits accumulated on Precambrian and Variscan terranes after earlier Mesozoic uplift and erosion. Waning of detrital influx into southern Tunisia and adjacent Libya in the west and into Israel in the east initiated an Albian to early Cenomanian transgression of Tethys. By late Cenomanian time it had flooded the entire cratonic margin, and spread southward into the Murzuk and Southern basins, as well as onto the Arabo-Nubian shield. Latest Jurassic-earliest Cretaceous, mid-Cretaceous, and Late Cretaceous transgressions across northeastern Africa recorded in these sequences may reflect worldwide eustatic sea-level rises. In contrast, renewed large supply of detritus during each regression and a comparable subsidence history of intracratonic and marginal basins imply regional tectonic control. 6 figures.
On the representation of many-body interactions in water
Medders, Gregory R.; Gotz, Andreas W.; Morales, Miguel A.; Bajaj, Pushp; Paesani, Francesco
2015-09-09
Our recent work has shown that the many-body expansion of the interaction energy can be used to develop analytical representations of global potential energy surfaces (PESs) for water. In this study, the role of short- and long-range interactions at different orders is investigated by analyzing water potentials that treat the leading terms of the many-body expansion through implicit (i.e., TTM3-F and TTM4-F PESs) and explicit (i.e., WHBB and MB-pol PESs) representations. Moreover, it is found that explicit short-range representations of 2-body and 3-body interactions along with a physically correct incorporation of short- and long-range contributions are necessary for an accurate representation of the water interactions from the gas to the condensed phase. Likewise, a complete many-body representation of the dipole moment surface is found to be crucial to reproducing the correct intensities of the infrared spectrum of liquid water.
CHARACTERISTIC SIZE OF FLARE KERNELS IN THE VISIBLE AND NEAR-INFRARED CONTINUA
Xu, Yan; Jing, Ju; Wang, Haimin; Cao, Wenda
2012-05-01
In this Letter, we present a new approach to estimate the formation height of visible and near-infrared emission of an X10 flare. The sizes of flare emission cores in three wavelengths are accurately measured during the peak of the flare. The source size is the largest in the G band at 4308 A and shrinks toward longer wavelengths, namely the green continuum at 5200 A and NIR at 15600 A, where the emission is believed to originate from the deeper atmosphere. This size-wavelength variation is likely explained by the direct heating model as electrons need to move along converging field lines from the corona to the photosphere. Therefore, one can observe the smallest source, which in our case is 0.''65 {+-} 0.''02 in the bottom layer (represented by NIR), and observe relatively larger kernels in upper layers of 1.''03 {+-} 0.''14 and 1.''96 {+-} 0.''27, using the green continuum and G band, respectively. We then compare the source sizes with a simple magnetic geometry to derive the formation height of the white-light sources and magnetic pressure in different layers inside the flare loop.
Graph representation of protein free energy landscape
Li, Minghai; Duan, Mojie; Fan, Jue; Huo, Shuanghong; Han, Li
2013-11-14
The thermodynamics and kinetics of protein folding and protein conformational changes are governed by the underlying free energy landscape. However, the multidimensional nature of the free energy landscape makes it difficult to describe. We propose to use a weighted-graph approach to depict the free energy landscape with the nodes on the graph representing the conformational states and the edge weights reflecting the free energy barriers between the states. Our graph is constructed from a molecular dynamics trajectory and does not involve projecting the multi-dimensional free energy landscape onto a low-dimensional space defined by a few order parameters. The calculation of free energy barriers was based on transition-path theory using the MSMBuilder2 package. We compare our graph with the widely used transition disconnectivity graph (TRDG) which is constructed from the same trajectory and show that our approach gives more accurate description of the free energy landscape than the TRDG approach even though the latter can be organized into a simple tree representation. The weighted-graph is a general approach and can be used on any complex system.
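Given barrier heights as edge weights on such a graph, a natural route between two conformational states is the path whose highest barrier is lowest (a minimax path). The following is a minimal sketch using a Dijkstra-style search, not the MSMBuilder2/TRDG machinery referenced in the abstract; state names and barrier values are invented for illustration.

```python
import heapq

def lowest_barrier_path(edges, start, goal):
    """Minimax path: route between conformational states that minimizes
    the highest free-energy barrier crossed along the way.
    `edges` maps (a, b) -> barrier height; the graph is undirected."""
    adj = {}
    for (a, b), w in edges.items():
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    best = {start: 0.0}            # lowest bottleneck found per node
    heap = [(0.0, start, [start])]
    while heap:
        bottleneck, node, path = heapq.heappop(heap)
        if node == goal:
            return bottleneck, path
        for nbr, w in adj.get(node, []):
            cand = max(bottleneck, w)   # path bottleneck through this edge
            if cand < best.get(nbr, float("inf")):
                best[nbr] = cand
                heapq.heappush(heap, (cand, nbr, path + [nbr]))
    return float("inf"), []
```

On a toy four-state landscape the search correctly prefers a longer route over a lower overall barrier to a shorter route over a higher one.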
Zhang, Y.; Easter, R. C.; Ghan, S. J.; Abdul-Razzak, H.
2002-11-07
We use a 1-D version of a climate-aerosol-chemistry model with both modal and sectional aerosol size representations to evaluate the impact of aerosol size representation on modeling aerosol-cloud interactions in shallow stratiform clouds observed during the 2nd Aerosol Characterization Experiment. Both the modal (with prognostic aerosol number and mass or prognostic aerosol number, surface area and mass, referred to as the Modal-NM and Modal-NSM) and the sectional approaches (with 12 and 36 sections) predict total number and mass for interstitial and activated particles that are generally within several percent of references from a high resolution 108-section approach. The modal approach with prognostic aerosol mass but diagnostic number (referred to as the Modal-M) cannot accurately predict the total particle number and surface areas, with deviations from the references ranging from 7-161%. The particle size distributions are sensitive to size representations, with normalized absolute differences of up to 12% and 37% for the 36- and 12-section approaches, and 30%, 39%, and 179% for the Modal-NSM, Modal-NM, and Modal-M, respectively. For the Modal-NSM and Modal-NM, differences from the references are primarily due to the inherent assumptions and limitations of the modal approach. In particular, they cannot resolve the abrupt size transition between the interstitial and activated aerosol fractions. For the 12- and 36-section approaches, differences are largely due to limitations of the parameterized activation for non-log-normal size distributions, plus the coarse resolution for the 12-section case.
Sinc function representation and three-loop master diagrams
Easther, Richard; Guralnik, Gerald; Hahn, Stephen
2001-04-15
We test the Sinc function representation, a novel method for numerically evaluating Feynman diagrams, by using it to evaluate the three-loop master diagrams. Analytical results have been obtained for all these diagrams, and we find excellent agreement between our calculations and the exact values. The Sinc function representation converges rapidly, and it is straightforward to obtain accuracies of 1 part in 10{sup 6} for these diagrams and with longer runs we found results better than 1 part in 10{sup 12}. Finally, this paper extends the Sinc function representation to diagrams containing massless propagators.
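The core numerical ingredient, the truncated sinc (trapezoidal) sum, can be demonstrated on a simple analytic integrand, where its exponential convergence is easy to see. This toy quadrature is our illustration of the underlying principle, not the authors' diagram-evaluation code; the step size and truncation below are arbitrary choices.

```python
import numpy as np

def sinc_quadrature(f, h, N):
    # Truncated sinc (trapezoidal) sum: the integral of f over the real
    # line approximated by h * sum_{k=-N..N} f(k h).  For analytic,
    # rapidly decaying integrands the error falls off exponentially in
    # 1/h -- the property the Sinc representation exploits.
    k = np.arange(-N, N + 1)
    return h * np.sum(f(k * h))
```

With f(x) = exp(-x^2), h = 0.5, and N = 40 the sum already matches the exact value sqrt(pi) to near machine precision.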
Regression analysis study on the carbon dioxide capture process
Zhou, Q.; Chan, C.W.; Tontiwachiwuthikul, P.
2008-07-15
Research on amine-based carbon dioxide (CO{sub 2}) capture has mainly focused on improving the effectiveness and efficiency of the CO{sub 2} capture process. The objective of our work is to explore relationships among key parameters that affect the CO{sub 2} production rate. From a survey of relevant literature, we observed that the significant parameters influencing the CO{sub 2} production rate include the reboiler heat duty, solvent concentration, solvent circulation rate, and CO{sub 2} lean loading. While it is widely recognized that these parameters are related, the exact nature of the relationships is unknown. This paper presents a regression study conducted with data collected at the International Test Center for CO{sub 2} capture (ITC) located at University of Regina, Saskatchewan, Canada. The regression technique was applied to a data set consisting of data on 113 days of operation of the CO{sub 2} capture plant, and four mathematical models of the key parameters have been developed. The models can be used for predicting the performance of the plant when changes occur in the process. By manipulation of the parameter values, the efficiency of the CO{sub 2} capture process can be improved.
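The kind of multiple linear regression described can be sketched with ordinary least squares. The ITC plant data are not public, so the data below are synthetic stand-ins; the parameter names, ranges, units, and true coefficients are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 113  # one row per operating day, mirroring the study's data set size
# Hypothetical operating parameters (synthetic stand-ins, not ITC data):
reboiler_duty = rng.uniform(2.0, 4.0, n)   # assumed GJ / tonne CO2
solvent_conc = rng.uniform(3.0, 5.0, n)    # assumed molarity
circulation = rng.uniform(50.0, 90.0, n)   # assumed L / min
lean_loading = rng.uniform(0.15, 0.30, n)  # assumed mol CO2 / mol amine
# Synthetic production-rate response with known coefficients plus noise.
y = (1.5 * reboiler_duty + 0.8 * solvent_conc
     + 0.05 * circulation - 6.0 * lean_loading
     + rng.normal(0.0, 0.1, n))
# Ordinary least squares: design matrix with an intercept column.
X = np.column_stack([np.ones(n), reboiler_duty, solvent_conc,
                     circulation, lean_loading])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted coefficients recover the planted values, which is the sanity check one would also run before trusting a model fitted to real plant data.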
Contescu, Cristian I
2006-01-01
This report supports the effort for development of small scale fabrication of UCO (a mixture of UO{sub 2} and UC{sub 2}) fuel kernels for the generation IV high temperature gas reactor program. In particular, it is focused on optimization of dispersion conditions of carbon black in the broths from which carbon-containing (UO{sub 2} {center_dot} H{sub 2}O + C) gel spheres are prepared by internal gelation. The broth results from mixing a hexamethylenetetramine (HMTA) and urea solution with an acid-deficient uranyl nitrate (ADUN) solution. Carbon black, which is previously added to one or other of the components, must stay dispersed during gelation. The report provides a detailed description of characterization efforts and results, aimed at identifying and testing carbon black and surfactant combinations that would produce stable dispersions, with carbon particle sizes below 1 {micro}m, in aqueous HMTA/urea and ADUN solutions. A battery of characterization methods was used to identify the properties affecting the water dispersability of carbon blacks, such as surface area, aggregate morphology, volatile content, and, most importantly, surface chemistry. The report introduces the basic principles for each physical or chemical method of carbon black characterization, lists the results obtained, and underlines cross-correlations between methods. Particular attention is given to a newly developed method for characterization of surface chemical groups on carbons in terms of their acid-base properties (pK{sub a} spectra) based on potentiometric titration. Fourier-transform infrared (FTIR) spectroscopy was used to confirm the identity of surfactants, both ionic and non-ionic. In addition, background information on carbon black properties and the mechanism by which surfactants disperse carbon black in water is also provided. A list of main physical and chemical properties characterized, samples analyzed, and results obtained, as well as information on the desired trend or
OPTICAL SPECTRAL OBSERVATIONS OF A FLICKERING WHITE-LIGHT KERNEL IN A C1 SOLAR FLARE
Kowalski, Adam F.; Cauzzi, Gianna; Fletcher, Lyndsay
2015-01-10
We analyze optical spectra of a two-ribbon, long-duration C1.1 flare that occurred on 2011 August 18 within AR 11271 (SOL2011-08-18T15:15). The impulsive phase of the flare was observed with a comprehensive set of space-borne and ground-based instruments, which provide a range of unique diagnostics of the lower flaring atmosphere. Here we report the detection of enhanced continuum emission, observed in low-resolution spectra from 3600 Å to 4550 Å acquired with the Horizontal Spectrograph at the Dunn Solar Telescope. A small, ≤0.''5 (10{sup 15} cm{sup 2}) penumbral/umbral kernel brightens repeatedly in the optical continuum and chromospheric emission lines, similar to the temporal characteristics of the hard X-ray variation as detected by the Gamma-ray Burst Monitor on the Fermi spacecraft. Radiative-hydrodynamic flare models that employ a nonthermal electron beam energy flux high enough to produce the optical contrast in our flare spectra would predict a large Balmer jump in emission, indicative of hydrogen recombination radiation from the upper flare chromosphere. However, we find no evidence of such a Balmer jump in the bluemost spectral region of the continuum excess. Just redward of the expected Balmer jump, we find evidence of a ''blue continuum bump'' in the excess emission which may be indicative of the merging of the higher order Balmer lines. The large number of observational constraints provides a springboard for modeling the blue/optical emission for this particular flare with radiative-hydrodynamic codes, which are necessary to understand the opacity effects for the continuum and emission line radiation at these wavelengths.
Highest-weight representations of Borcherds algebras
Slansky, R.
1997-01-01
General features of highest-weight representations of Borcherds algebras are described. To show their typical features, several representations of Borcherds extensions of finite-dimensional algebras are analyzed. Then the example of the extension of affine su(2) to a Borcherds algebra is examined. These algebras provide a natural way to extend a Kac-Moody algebra to include the Hamiltonian and number-changing operators in a generalized symmetry structure.
A Visual Analytics Approach for Correlation, Classification, and Regression Analysis
Steed, Chad A; SwanII, J. Edward; Fitzpatrick, Patrick J.; Jankun-Kelly, T.J.
2012-02-01
New approaches that combine the strengths of humans and machines are necessary to equip analysts with the proper tools for exploring today's increasingly complex, multivariate data sets. In this paper, a novel visual data mining framework, called the Multidimensional Data eXplorer (MDX), is described that addresses the challenges of today's data by combining automated statistical analytics with a highly interactive parallel coordinates based canvas. In addition to several intuitive interaction capabilities, this framework offers a rich set of graphical statistical indicators, interactive regression analysis, visual correlation mining, automated axis arrangements and filtering, and data classification techniques. The current work provides a detailed description of the system as well as a discussion of key design aspects and critical feedback from domain experts.
Gonis, A.; Zhang, X. G.; Stocks, G. M.; Nicholson, D. M.
2015-10-23
Density functional theory for the case of general, N-representable densities is reformulated in terms of density functional derivatives of expectation values of operators evaluated with wave functions leading to a density, making no reference to the concept of potential. The developments provide a complete solution of the v-representability problem by establishing a mathematical procedure that determines whether a density is v-representable and in the case of an affirmative answer determines the potential (within an additive constant) as a derivative with respect to the density of a constrained search functional. It also establishes the existence of an energy functional of the density that, for v-representable densities, assumes its minimum value at the density describing the ground state of an interacting many-particle system. The theorems of Hohenberg and Kohn emerge as special cases of the formalism.
Patrick, Christopher E.; Thygesen, Kristian S.
2015-09-14
We present calculations of the correlation energies of crystalline solids and isolated systems within the adiabatic-connection fluctuation-dissipation formulation of density-functional theory. We perform a quantitative comparison of a set of model exchange-correlation kernels originally derived for the homogeneous electron gas (HEG), including the recently introduced renormalized adiabatic local-density approximation (rALDA) and also kernels which (a) satisfy known exact limits of the HEG, (b) carry a frequency dependence, or (c) display a 1/k{sup 2} divergence for small wavevectors. After generalizing the kernels to inhomogeneous systems through a reciprocal-space averaging procedure, we calculate the lattice constants and bulk moduli of a test set of 10 solids consisting of tetrahedrally bonded semiconductors (C, Si, SiC), ionic compounds (MgO, LiCl, LiF), and metals (Al, Na, Cu, Pd). We also consider the atomization energy of the H{sub 2} molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA’s tendency to overestimate the magnitude of the correlation energy whilst maintaining a high-accuracy description of structural properties.
Bansal, R.M.; Kothari, L.S.; Tewari, S.P.
1980-10-01
A new scattering kernel for heavy water has been proposed. The kernel takes into account the chemical binding energy effects and also includes the rotational and intramolecular vibrational modes. Using this scattering kernel, various neutron transport processes in the temperature range 5 to 60°C have been studied and compared with the corresponding experimental results. The calculated results include the total neutron scattering cross section at 20°C; the asymptotic decay of neutron pulses in the temperature range 5 to 60°C and the temperature variation of the diffusion coefficient and diffusion cooling coefficient; time-dependent spectra inside finite-sized assemblies of heavy water at 20 and 43.3°C; thermalization time; and diffusion length and space-dependent studies in pure and poisoned assemblies of heavy water. The calculated results are in good agreement with the experimental results. At some places notable differences are observed between the results obtained using our scattering kernel and those based on the Honeck kernel.
Support Vector Machine algorithm for regression and classification
Energy Science and Technology Software Center (OSTI)
2001-08-01
The software is an implementation of the Support Vector Machine (SVM) algorithm that was invented and developed by Vladimir Vapnik and his co-workers at AT&T Bell Laboratories. The specific implementation reported here is an Active Set method for solving the quadratic optimization problem that forms the major part of any SVM program. The implementation is tuned to the specific constraints generated in SVM learning and is thus more efficient than general-purpose quadratic optimization programs. A decomposition method has been implemented in the software that enables processing of large data sets; the size of the learning data is virtually unlimited by the capacity of the computer's physical memory. The software is flexible and extensible. Two upper bounds are implemented to regulate SVM learning for classification, which allow users to adjust the false positive and false negative rates. The software can be used either as a standalone, general-purpose SVM regression or classification program, or be embedded into a larger software system.
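The effect of the two upper bounds described above can be illustrated with a minimal sketch (not the Active Set implementation itself): a linear SVM trained by subgradient descent on the hinge loss, where hypothetical per-class penalties C_pos and C_neg stand in for the class-wise bounds that trade off false positives against false negatives.

```python
import numpy as np

def train_svm(X, y, C_pos=1.0, C_neg=1.0, lr=0.01, epochs=200):
    """Linear SVM trained by subgradient descent on the hinge loss.

    The separate penalties C_pos / C_neg play the role of the two upper
    bounds: raising C_pos punishes misclassified positives more (fewer
    false negatives), and vice versa.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            C = C_pos if y[i] > 0 else C_neg
            if y[i] * (X[i] @ w + b) < 1:          # margin violated
                w -= lr * (w - C * y[i] * X[i])
                b += lr * C * y[i]
            else:                                  # only the regularizer acts
                w -= lr * w
    return w, b

# Tiny linearly separable toy problem
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = train_svm(X, y)
```

A production solver would instead solve the dual quadratic program, as the Active Set method above does; the sketch only shows how asymmetric penalties shift the decision boundary.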
Regression analysis of technical parameters affecting nuclear power plant performances
Ghazy, R.; Ricotti, M. E.; Trueco, P.
2012-07-01
Since the 1980s many studies have been conducted to explain good and bad performances of commercial nuclear power plants (NPPs), yet no definitive correlation has been found to be fully representative of plant operational experience. In early works, data availability and the number of operating power stations were both limited; therefore, results suggested that specific technical characteristics of NPPs were the main causal factors for successful plant operation. Although these aspects still play a significant role, later studies and observations showed that other factors concerning plant management and organization could instead be predominant when comparing utilities' operational and economic results. Utility quality, in a word, can be used to summarize all the managerial and operational aspects that seem to be effective in determining plant performance. In this paper operational data of a consistent sample of commercial nuclear power stations, out of the total 433 operating NPPs, are analyzed, focusing mainly on the last decade of operational experience. The sample consists of PWR and BWR technology, operated by utilities located in different countries, including the U.S., Japan, France, Germany, and Finland. Multivariate regression is performed using the Unit Capability Factor (UCF) as the dependent variable; this factor reflects the effectiveness of plant programs and practices in maximizing available electrical generation and consequently provides an overall indication of how well plants are operated and maintained. Aspects that may not be true causal factors but which can have a consistent impact on the UCF, such as technology design, supplier, size, and age, are included in the analysis as independent variables. (authors)
Category of trees in representation theory of quantum algebras
Moskaliuk, N. M.; Moskaliuk, S. S.
2013-10-15
New applications of categorical methods are connected with new additional structures on categories. One such structure in the representation theory of quantum algebras, the category of Kuznetsov-Smorodinsky-Vilenkin-Smirnov (KSVS) trees, is constructed: its objects are finite rooted KSVS trees, and its morphisms are generated by transitions from one KSVS tree to another.
On the representation of many-body interactions in water
Medders, Gregory R.; Gotz, Andreas W.; Morales, Miguel A.; Bajaj, Pushp; Paesani, Francesco
2015-09-09
Our recent work has shown that the many-body expansion of the interaction energy can be used to develop analytical representations of global potential energy surfaces (PESs) for water. In this study, the role of short- and long-range interactions at different orders is investigated by analyzing water potentials that treat the leading terms of the many-body expansion through implicit (i.e., TTM3-F and TTM4-F PESs) and explicit (i.e., WHBB and MB-pol PESs) representations. Moreover, it is found that explicit short-range representations of 2-body and 3-body interactions along with a physically correct incorporation of short- and long-range contributions are necessary for an accurate representation of the water interactions from the gas to the condensed phase. Likewise, a complete many-body representation of the dipole moment surface is found to be crucial to reproducing the correct intensities of the infrared spectrum of liquid water.
SEP Request for Approval Form 3 - Other Complex Regression Model Rationale
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
SEP Request for Approval Form 3 - Other Complex Regression Model Rationale (SEP-Request-for-Approval-Form-3_Other-Complex-Regression-Model-Rationale.docx, 36.53 KB)
The Environmental Management Site-Specific Advisory Board recommends that DOE develop graphic representations of waste disposition paths.
Representation of Solar Capacity Value in the ReEDS Capacity Expansion Model
U.S. Department of Energy (DOE) all webpages (Extended Search)
Representation of Limited Rights Data and Restricted Computer Software (44.02 KB)
Discrete physics: Practice, representation and rules of correspondence
Noyes, H.P.
1988-07-01
We make a brief historical review of some aspects of modern physics which we find most significant in our own endeavor. We discuss the ''Yukawa Vertices'' of elementary particle theory as used in laboratory practice, second quantized field theory, analytic S-Matrix theory and in our own approach. We review the conserved quantum numbers in the Standard Model of quarks and leptons. This concludes our presentation of the ''E-frame.'' We try to develop a self-consistent representation of our theory. We have already claimed that this approach provides a discrete reconciliation between the formal (representational) aspects of quantum mechanics and relativity. Also discussed are rules of correspondence connecting the formalism to the practice of physics by using the counter paradigm and event-based coordinates to construct relativistic quantum mechanics in a new way. 31 refs., 12 figs., 1 tab.
Representation of Limited Rights Data and Restricted Computer Software
U.S. Department of Energy (DOE) all webpages (Extended Search)
REPRESENTATION OF LIMITED RIGHTS DATA AND RESTRICTED COMPUTER SOFTWARE Applicant: Funding Opportunity Announcement/Solicitation No.: (a) Any data delivered under an award resulting from this announcement is subject to the Rights in Data - General or the Rights in Data - Programs Covered Under Special Data Statutes clause (See Intellectual Property Provisions). Under these clauses, the Recipient may withhold from delivery data that qualify as limited rights data or restricted computer software.
The Institute for Public Representation, on behalf of the Potomac
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Riverkeeper, Inc., the Patuxent Riverkeeper, and the Anacostia Riverkeeper at Earth Conservation Corps Comments on Department of Energy's Special Environmental Analysis Regarding Operatio | Department of Energy
EPACT Representation for Covered Awards Over $100,000 | Department of
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
EPACT Representation for Covered Awards Over $100,000 (65.06 KB)
Representation of analysis results involving aleatory and epistemic uncertainty.
Johnson, Jay Dean; Helton, Jon Craig; Oberkampf, William Louis; Sallaberry, Cedric J.
2008-08-01
Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behavior of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary cumulative distribution functions (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (i.e., interval analysis, possibility theory, evidence theory, probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterizations of epistemic uncertainty.
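The family-of-CDFs picture described above can be sketched numerically under an assumed probability-theory characterization of the epistemic uncertainty: each candidate value of a poorly known parameter yields one empirical CDF from aleatory sampling, and the collection of curves forms the family. All distributions and parameter values below are illustrative, not taken from the analyses cited.

```python
import numpy as np

rng = np.random.default_rng(0)
x_grid = np.linspace(-5.0, 10.0, 200)

# Epistemic uncertainty: a fixed but poorly known mean mu, characterized
# here (one choice among interval/possibility/evidence/probability
# structures) by a handful of candidate values. Illustrative numbers only.
mu_candidates = [0.0, 1.0, 2.0, 3.0]

# Aleatory uncertainty: for each candidate mu, random system behavior
# yields one empirical CDF; the curves together form the family of CDFs.
cdf_family = []
for mu in mu_candidates:
    samples = rng.normal(mu, 1.0, 5000)
    cdf_family.append((samples[:, None] <= x_grid).mean(axis=0))
cdf_family = np.array(cdf_family)
```

Plotting all rows of `cdf_family` against `x_grid` gives the graphical format discussed above; an interval or evidence-theory characterization would replace the discrete candidate list with bounds on the family.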
Bag of Lines (BoL) for Improved Aerial Scene Representation
Sridharan, Harini; Cheriyadat, Anil M.
2014-09-22
Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
Regression Models for Demand Reduction based on Cluster Analysis of Load Profiles
Yamaguchi, Nobuyuki; Han, Junqiao; Ghatikar, Girish; Piette, Mary Ann; Asano, Hiroshi; Kiliccote, Sila
2009-06-28
This paper provides new regression models for demand reduction in Demand Response programs, for the purpose of ex ante evaluation of the programs and screening of customers for enrollment into them. The proposed regression models employ load sensitivity to outside air temperature and representative load patterns derived from cluster analysis of customer baseline load as explanatory variables. The performance of the proposed models is examined from the viewpoint of the validity of the explanatory variables and the goodness of fit of the regressions, using actual load profile data of Pacific Gas and Electric Company's commercial and industrial customers who participated in the 2008 Critical Peak Pricing program, including Manual and Automated Demand Response.
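The cluster-analysis step that produces the representative load patterns can be sketched as follows; the load shapes and the plain k-means routine are illustrative stand-ins, not the utility data or the exact clustering method of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24)

# Two synthetic groups of hourly baseline load profiles (kW): a flat
# "industrial" shape and an afternoon-peaked "office" shape. These are
# illustrative stand-ins for actual customer baseline load data.
flat = 50 + rng.normal(0, 1, (20, 24))
peaked = 30 + 40 * np.exp(-0.5 * ((hours - 14) / 3) ** 2) + rng.normal(0, 1, (20, 24))
profiles = np.vstack([flat, peaked])

def kmeans(X, k, iters=20):
    """Plain k-means; the centroids serve as representative load patterns."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]  # deterministic init
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(profiles, k=2)
```

In the regression models above, each customer's cluster centroid would then enter as an explanatory variable alongside temperature sensitivity.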
A role of chemical kinetics in the simulation of the reaction kernel of methane jet diffusion flames
Takahashi, Fumiaki; Katta, V.R.
1999-07-01
The detailed structure of the stabilizing region of an axisymmetric laminar methane jet diffusion flame has been studied numerically. Computations using a time-dependent, implicit, third-order accurate numerical scheme with buoyancy effects were performed using two different C{sub 2}-chemistry models and compared with the previous results using a C{sub 1}-chemistry model. The results were nearly identical for all kinetic models except that the C{sub 1}-chemistry model over-predicted the methyl-radical and formaldehyde concentrations on the fuel side of the flame and that the standoff distance of the flame base from the burner rim varied. The standoff distance was sensitive to the CH{sub 3} + H + (M) {yields} CH{sub 4} + (M) reaction. The highest reactivity spot (reaction kernel) was formed in the relatively low-temperature (<1,600 K) flame base, where the CH{sub 3} + O {yields} CH{sub 2}O + H reaction predominantly contributed to the heat release, providing a stationary ignition source to incoming reactants and thereby stabilizing the trailing diffusion flame.
Kornilov, Oleg; Toennies, J. Peter
2015-02-21
The size distribution of para-H{sub 2} (pH{sub 2}) clusters produced in free jet expansions at a source temperature of T{sub 0} = 29.5 K and pressures of P{sub 0} = 0.9-1.96 bars is reported and analyzed according to a cluster growth model based on the Smoluchowski theory with kernel scaling. Good overall agreement is found between the measured and predicted shape of the distribution, N{sub k} = Ak{sup a}e{sup -bk}. The fit yields values for A and b for values of a derived from simple collision models. The small remaining deviations between measured abundances and theory imply a (pH{sub 2}){sub k} magic number cluster of k = 13, as has been observed previously by Raman spectroscopy. The predicted linear dependence of b{sup -(a+1)} on source gas pressure was verified and used to determine the value of the basic effective agglomeration reaction rate constant. A comparison of the corresponding effective growth cross sections {sigma}{sub 11} with results from a similar analysis of He cluster size distributions indicates that the latter are larger by a factor of 6-10. An analysis of the three-body recombination rates, the geometric sizes, and the fact that the He clusters are liquid independent of their size can explain the larger cross sections found for He.
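The kernel-scaling form of the size distribution quoted above can be evaluated directly; the exponent a and decay constant b below are assumed illustrative values, not the fitted ones from the experiment.

```python
import numpy as np

def abundance(k, A, a, b):
    """Smoluchowski kernel-scaling form N_k = A * k**a * exp(-b*k)."""
    return A * k**a * np.exp(-b * k)

k = np.arange(1, 50)
a = 1.5                                  # assumed collision-model exponent
b = 0.2                                  # assumed decay constant (set by source pressure)
A = 1.0 / abundance(k, 1.0, a, b).sum()  # normalize total abundance to 1
N = abundance(k, A, a, b)

kmax = k[np.argmax(N)]                   # most probable cluster size, near a/b
```

Deviations of measured abundances from this smooth curve, at e.g. k = 13, are what single out magic-number clusters in the analysis above.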
Collins, John; Rogers, Ted
2015-04-01
There is considerable controversy about the size and importance of non-perturbative contributions to the evolution of transverse momentum dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that when taken to lower Q is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that non-perturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and non-perturbative. We make a motivated proposal for the parameterization of the non-perturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical non-perturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell- Yan experiments to measure the Sivers function.
Review of structure representation and reconstruction on mesoscale and microscale
Li, Dongsheng
2014-05-01
Structure representation and reconstruction on the mesoscale and microscale are critical in materials design, advanced manufacturing, and multiscale modeling. Microstructure reconstruction has been applied in different areas of materials science and technology: structural materials, energy materials, geology, hydrology, etc. This review summarizes the microstructure descriptors and formulations used to represent, and the algorithms used to reconstruct, structures at the microscale and mesoscale. In the stochastic methods using correlation functions, different optimization approaches have been adapted for objective function minimization. A variety of reconstruction approaches are compared in efficiency and accuracy.
Direct Angular Representation Monte Carlo Code for Criticality Safety Analysis
Energy Science and Technology Software Center (OSTI)
1988-01-01
Version 00 MKENO-DAR calculates the effective neutron multiplication factor and neutron flux distribution in a three-dimensional medium, solving the multigroup neutron transport equation with a precise angular distribution function for neutron scattering. MKENO-DAR was developed from CCC-492/MULTI-KENO, which was developed from KENO-IV. MULTI-KENO divides the system into many subsystem SUPER BOXes, where the size of the BOX TYPEs in each SUPER BOX can be selected independently. MKENO-DAR improves the representation of the scattering angle over that in MULTI-KENO.
Modeling Personalized Email Prioritization: Classification-based and Regression-based Approaches
Yoo S.; Yang, Y.; Carbonell, J.
2011-10-24
Email overload, even after spam filtering, presents a serious productivity challenge for busy professionals and executives. One solution is automated prioritization of incoming emails to ensure the most important are read and processed quickly, while others are processed later, as and if time permits, in declining priority levels. This paper presents a study of machine learning approaches to email prioritization into discrete levels, comparing ordinal regression versus classifier cascades. Given the ordinal nature of discrete email priority levels, SVM ordinal regression would be expected to perform well, but surprisingly a cascade of SVM classifiers significantly outperforms ordinal regression for email prioritization. In contrast, SVM regression performs well -- better than classifiers -- on selected UCI data sets. This unexpected performance inversion is analyzed and results are presented, providing core functionality for email prioritization systems.
Deng, Yangyang; Parajuli, Prem B.
2011-08-10
Evaluation of the economic feasibility of a bio-gasification facility requires understanding its unit cost under different production capacities. The objective of this study was to evaluate the unit cost of syngas production at capacities from 60 through 1,800 Nm{sup 3}/h using an economic model with three regression analysis techniques (simple regression, reciprocal regression, and log-log regression). The preliminary result of this study showed that the reciprocal regression technique gave the best-fit curve between per-unit cost and production capacity, with a sum of error squares (SES) lower than 0.001 and a coefficient of determination (R{sup 2}) of 0.996. The regression analysis determined the minimum unit cost of syngas production for micro-scale bio-gasification facilities to be $0.052/Nm{sup 3}, at a capacity of 2,880 Nm{sup 3}/h. The results of this study suggest that to reduce cost, facilities should run at high production capacity. In addition, the contribution of this technique could be a new categorical criterion to evaluate micro-scale bio-gasification facilities from the perspective of economic analysis.
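The reciprocal regression technique named above can be sketched as an ordinary least-squares fit in the transformed variable 1/capacity; the data points below are hypothetical stand-ins for the economic-model output, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical (capacity, unit cost) points standing in for the economic
# model output; the true values are not given in the abstract.
capacity = np.array([60.0, 200.0, 600.0, 1200.0, 1800.0])    # Nm^3/h
unit_cost = 0.05 + 12.0 / capacity + rng.normal(0, 1e-3, 5)  # $/Nm^3

# Reciprocal regression: cost = b0 + b1/capacity, fit by ordinary
# least squares in the transformed variable 1/capacity.
X = np.column_stack([np.ones_like(capacity), 1.0 / capacity])
beta, *_ = np.linalg.lstsq(X, unit_cost, rcond=None)

fitted = X @ beta
ss_res = ((unit_cost - fitted) ** 2).sum()
ss_tot = ((unit_cost - unit_cost.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
```

The fitted curve falls monotonically with capacity, which is why the analysis above finds the minimum unit cost at the high end of the capacity range.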
Representation of integral dispersion relations by local forms
Ferreira, Erasmo; Sesma, Javier
2008-03-15
The representation of the usual integral dispersion relations (IDRs) of scattering theory through series of derivatives of the amplitudes is discussed, extended, simplified, and confirmed as mathematical identities. Forms of derivative dispersion relations (DDRs) valid for the whole energy interval, recently obtained and presented as double infinite series, are simplified through the use of new sum rules of the incomplete {gamma} functions, being reduced to single summations, where the usual convergence criteria are easily applied. For the forms of the imaginary amplitude used in phenomenology of hadronic scattering at high energies, we show that expressions for the DDR can represent, with absolute accuracy, the IDR of scattering theory, as true mathematical identities. Besides the fact that the algebraic manipulation can be easily understood, numerical examples show the accuracy of these representations up to the maximum available machine precision. As consequence of our work, it is concluded that the standard forms, sDDR, originally intended for high energy limits are an inconvenient and incomplete separation of terms of the full expression, leading to wrong evaluations. Since the correspondence between IDR and the DDR expansions is linear, our results have wide applicability, covering more general functions, built as combinations of well studied basic forms.
Braids as a representation space of SU(5)
Cartin, Daniel
2015-06-15
The standard model of particle physics provides very accurate predictions of phenomena occurring at the sub-atomic level, but the reason for the choice of symmetry group and the large number of particles considered elementary is still unknown. Along the lines of previous preon models positing a substructure to explain these aspects, Bilson-Thompson showed how the first family of elementary particles is realized as the crossings of braids made of three strands, with charges resulting from twists of those strands with certain conditions; in this topological model, there are only two distinct neutrino states. Modeling the particles as braids implies these braids must be the representation space of a Lie algebra, giving the symmetries of the standard model. In this paper, this representation is made explicit, obtaining the raising operators associated with the Lie algebra of SU(5), one of the earliest grand unified theories. Because the braids form a group, the action of these operators are braids themselves, leading to their identification as gauge bosons. Possible choices for the other two families are also given. Although this realization of particles as braids is lacking a dynamical framework, it is very suggestive, especially when considered as a natural method of adding matter to loop quantum gravity.
Improving the representation of hydrologic processes in Earth System Models
Clark, Martyn P.; Fan, Ying; Lawrence, David M.; Adam, J. C.; Bolster, Diogo; Gochis, David; Hooper, Richard P.; Kumar, Mukesh; Leung, Lai-Yung R.; Mackay, D. Scott; Maxwell, Reed M.; Shen, Chaopeng; Swenson, Sean C.; Zeng, Xubin
2015-08-21
Many of the scientific and societal challenges in understanding and preparing for global environmental change rest upon our ability to understand and predict the water cycle change at large river basin, continent, and global scales. However, current large-scale models, such as the land components of Earth System Models (ESMs), do not yet represent the terrestrial water cycle in a fully integrated manner or resolve the finer-scale processes that can dominate large-scale water budgets. This paper reviews the current representation of hydrologic processes in ESMs and identifies the key opportunities for improvement. This review suggests that (1) the development of ESMs has not kept pace with modeling advances in hydrology, both through neglecting key processes (e.g., groundwater) and neglecting key aspects of spatial variability and hydrologic connectivity; and (2) many modeling advances in hydrology can readily be incorporated into ESMs and substantially improve predictions of the water cycle. Accelerating modeling advances in ESMs requires comprehensive hydrologic benchmarking activities, in order to systematically evaluate competing modeling alternatives, understand model weaknesses, and prioritize model development needs. This demands stronger collaboration, both through greater engagement of hydrologists in ESM development and through more detailed evaluation of ESM processes in research watersheds. Advances in the representation of hydrologic process in ESMs can substantially improve energy, carbon and nutrient cycle prediction capabilities through the fundamental role the water cycle plays in regulating these cycles.
Method of Equivalencing for a Large Wind Power Plant with Multiple Turbine Representation:
Muljadi, E; Pasupulati, S.; Ellis, A.; Kosterov, D.
2008-07-01
This paper focuses on efforts to develop an equivalent representation of a Wind Power Plant (WPP) collector system for power system planning studies.
Method of Equivalencing for a Large Wind Power Plant with Multiple Turbine Representation: Preprint
Muljadi, E.; Pasupulati, S.; Ellis, A.; Kosterov, D.
2008-07-01
This paper focuses on our effort to develop an equivalent representation of a Wind Power Plant collector system for power system planning studies.
Orbit-product representation and correction of Gaussian belief propagation
Johnson, Jason K; Chertkov, Michael; Chernyak, Vladimir
2009-01-01
We present a new interpretation of Gaussian belief propagation (GaBP) based on the 'zeta function' representation of the determinant as a product over orbits of a graph. We show that GaBP captures back-tracking orbits of the graph and consider how to correct this estimate by accounting for non-backtracking orbits. We show that the product over non-backtracking orbits may be interpreted as the determinant of the non-backtracking adjacency matrix of the graph with edge weights based on the solution of GaBP. An efficient method is proposed to compute a truncated correction factor including all non-backtracking orbits up to a specified length.
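A minimal sketch of Gaussian belief propagation itself (the estimate whose orbit corrections are discussed above) is shown here on a tree-structured model, where the message fixed point reproduces the exact marginal variances; the precision matrix is an arbitrary illustrative example, not taken from the paper.

```python
import numpy as np

# Tree-structured Gaussian model (3-node chain) given by its precision
# matrix J; on a tree GaBP is exact, so no non-backtracking correction
# is needed. The numbers are an arbitrary illustration.
J = np.array([[ 2.0, -0.5,  0.0],
              [-0.5,  2.0, -0.5],
              [ 0.0, -0.5,  2.0]])

edges = [(0, 1), (1, 0), (1, 2), (2, 1)]   # directed message edges
P = {e: 0.0 for e in edges}                # precision messages

for _ in range(10):                        # iterate to the fixed point
    for (i, j) in edges:
        incoming = sum(P[(k, t)] for (k, t) in edges if t == i and k != j)
        P[(i, j)] = -J[i, j] ** 2 / (J[i, i] + incoming)

# Marginal variances from the converged messages vs. the exact inverse
var_bp = np.array([1.0 / (J[i, i] + sum(P[(k, t)] for (k, t) in edges if t == i))
                   for i in range(3)])
var_exact = np.diag(np.linalg.inv(J))
```

On a loopy graph the same iteration captures only backtracking orbits, and `var_bp` would differ from `var_exact` by exactly the non-backtracking orbit product analyzed above.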
Graphics processing units accelerated semiclassical initial value representation molecular dynamics
Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele
2014-05-07
This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered, and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20) versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W), and the critical issues related to the GPU implementation, are discussed. The resulting reduction in computational time and power consumption is significant, and semiclassical GPU calculations are shown to be environmentally friendly.
Local representation of the electronic dielectric response function
Lu, Deyu; Ge, Xiaochuan
2015-12-11
We present a local representation of the electronic dielectric response function, based on a spatial partition of the dielectric response into contributions from each occupied Wannier orbital using a generalized density functional perturbation theory. This procedure is fully ab initio, and therefore allows us to rigorously define local metrics, such as “bond polarizability,” on Wannier centers. We show that the locality of the bare response function is determined by the locality of three quantities: Wannier functions of the occupied manifold, the density matrix, and the Hamiltonian matrix. Furthermore, in systems with a gap, the bare dielectric response is exponentially localized, which supports the physical picture of the dielectric response function as a collection of interacting local responses that can be captured by a tight-binding model.
Coupling coefficients for tensor product representations of quantum SU(2)
Groenevelt, Wolter
2014-10-15
We study tensor products of infinite dimensional irreducible {sup *}-representations (not corepresentations) of the SU(2) quantum group. We obtain (generalized) eigenvectors of certain self-adjoint elements using spectral analysis of Jacobi operators associated to well-known q-hypergeometric orthogonal polynomials. We also compute coupling coefficients between different eigenvectors corresponding to the same eigenvalue. Since the continuous spectrum has multiplicity two, the corresponding coupling coefficients can be considered as 2 × 2-matrix-valued orthogonal functions. We compute explicitly the matrix elements of these functions. The coupling coefficients can be considered as q-analogs of Bessel functions. As a result, we obtain several q-integral identities involving q-hypergeometric orthogonal polynomials and q-Bessel-type functions.
Enhancement of Solar Energy Representation in the GCAM Model
Smith, Steven J.; Volke, April C.; Delgado Arias, Sabrina
2010-02-01
The representation of solar technologies in a research version of the GCAM (formerly MiniCAM) integrated assessment model has been enhanced to add technologies, improve the underlying data, and improve the interaction with the rest of the model. We find that the largest potential impact comes from the inclusion of thermal Concentrating Solar Power plants, which supply a substantial portion of electric generation in sunny regions of the world. Drawing on NREL research, domestic Solar Hot Water technologies have also been added in the United States region, where this technology competes with conventional electric and gas technologies. PV technologies are as implemented in the CCTP scenarios, drawing on NREL cost curves for the United States, extrapolated to other world regions using a spatial analysis of population and solar resources.
Kusumawati, Intan; Marwoto, Putut; Linuwih, Suharto
2015-09-30
Multi-representation ability has been widely studied, but it has not been implemented through a learning model. This study aimed to determine students' multi-representation ability, the relationship between multi-representation ability and oral communication skills, and the application of that relationship through the Presentatif Based on Multi-representation (PBM) learning model in solving geometric optics problems (Elementary Physics II). The design was a concurrent mixed-methods study with qualitative and quantitative weights. Data were collected through essay-form pre-tests and post-tests, observation sheets for oral communication skills, and observation-sheet assessments of learning with the PBM model; all instruments were rated in the high validity category, with scores of 3.91, 4.22, 4.13, and 3.88, respectively. Test reliability, estimated with the Cronbach alpha technique, gave a coefficient of 0.494. The research subjects were students of the Department of Physics Education at Unnes. Students' tendency toward each representation, ordered from high to low, was M, D, G, V, whereas the order of accuracy by group was V, D, G, M. Multi-representation ability and oral communication skills were found to be proportional, and implementing this relationship generated a grounded theory. This study should be replicated with other physics topics, or at other universities, for comparison.
Energy Science and Technology Software Center (OSTI)
2004-03-01
A package of classes for constructing and using distributed sparse and dense matrices, vectors and graphs, written in Java. Jpetra is intended to provide the foundation for basic matrix and vector operations for Java developers. Jpetra provides distributed memory operations via an abstract parallel machine interface. The most common implementation of this interface will be Java sockets.
Energy Science and Technology Software Center (OSTI)
2004-03-01
A package of classes for constructing and using distributed sparse and dense matrices, vectors, and graphs. Templated on the scalar and ordinal types so that any valid floating-point type, as well as any valid integer type, can be used with these classes. Other non-standard types, such as 3-by-3 matrices for the scalar type and mod-based integers for ordinal types, can also be used. Tpetra is intended to provide the foundation for basic matrix and vector operations for the next generation of Trilinos preconditioners and solvers. It can be considered as the follow-on to Epetra. Tpetra provides distributed memory operations via an abstract parallel machine interface. The most common implementation of this interface will be MPI.
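Neither Jpetra nor Tpetra source is shown in these entries. As a language-neutral illustration of the kind of kernel such packages build on, the following Python sketch stores a sparse matrix in compressed sparse row (CSR) form and applies the matrix-vector product. It is serial only; the distributed-memory and templated aspects of the libraries are not modeled, and the function names are illustrative.

```python
# Minimal sketch (not Tpetra/Jpetra code): compressed sparse row (CSR)
# storage and the sparse matrix-vector product built on it.

def csr_from_dense(A):
    """Convert a dense row-major matrix (list of lists) to CSR arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0:
                values.append(a)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x using CSR arrays; only stored nonzeros are touched."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y
```

In a distributed setting, each process would hold a contiguous block of rows and exchange the needed entries of `x` through the communication layer (Java sockets for Jpetra, MPI for Tpetra).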
2011-01-01
Check out this robotics breakthrough which allows robots to behave autonomously. For more information about INL research projects, visit http://www.facebook.com/idahonationallaboratory.
Physics Integration KErnels (PIKE)
Energy Science and Technology Software Center (OSTI)
2014-07-31
Pike is a software library for coupling and solving multiphysics applications. It provides basic interfaces and utilities for performing code-to-code coupling. It provides simple black-box Picard iteration methods for solving the coupled system of equations, including Jacobi and Gauss-Seidel solvers. Pike was developed originally to couple neutronics and thermal fluids codes to simulate a light water nuclear reactor for the Consortium for Simulation of Light-water Reactors (CASL) DOE Energy Innovation Hub. The Pike library contains no physics and just provides interfaces and utilities for coupling codes. It will be released open source under a BSD license as part of the Trilinos solver framework (trilinos.org), which is also BSD. This code provides capabilities similar to other open source multiphysics coupling libraries such as LIME, AMP, and MOOSE.
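The Pike API itself is not reproduced in this entry. The following Python sketch only illustrates the black-box Picard coupling idea described above, with a switch between the Jacobi and Gauss-Seidel update orders; the function name and signature are illustrative assumptions, not Pike's interface.

```python
# Hedged sketch (not the Pike API): black-box Picard coupling of two
# "codes" f and g, in Jacobi and Gauss-Seidel variants.

def picard_solve(f, g, x0, y0, tol=1e-10, max_iter=200, gauss_seidel=True):
    """Iterate x = f(y), y = g(x) until both updates stagnate."""
    x, y = x0, y0
    for _ in range(max_iter):
        x_new = f(y)
        # Gauss-Seidel feeds the fresh x into g; Jacobi uses the old one.
        y_new = g(x_new if gauss_seidel else x)
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    raise RuntimeError("Picard iteration did not converge")
```

For contractive sub-solves, the Gauss-Seidel order typically converges in fewer sweeps than Jacobi because each code sees the freshest data from its partner.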
None
2016-07-12
Check out this robotics breakthrough which allows robots to behave autonomously. For more information about INL research projects, visit http://www.facebook.com/idahonationallaboratory.
STUDIES IN ASTRONOMICAL TIME SERIES ANALYSIS. VI. BAYESIAN BLOCK REPRESENTATIONS
Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James
2013-02-20
This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations, at the same time suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it, an improved and generalized version of Bayesian Blocks, that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by Arias-Castro et al. In the spirit of Reproducible Research, all of the code and data necessary to reproduce all of the figures in this paper are included as supplementary material.
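As a rough illustration of the optimal-segmentation idea (not the authors' released code), the sketch below implements the O(N²) dynamic program for event data with the N(log N − log T) block fitness and a constant prior penalty per change point; the real algorithm additionally supports binned counts, point measurements, and trigger-mode operation.

```python
import math

def bayesian_blocks(t, ncp_prior=4.0):
    """O(N^2) dynamic program over change points (event-data fitness).
    Returns the optimal block edges for sorted event times t."""
    t = sorted(t)
    n = len(t)
    # Candidate block edges: midpoints between events, plus the two ends.
    edges = [t[0]] + [0.5 * (t[i] + t[i + 1]) for i in range(n - 1)] + [t[-1]]
    best, last = [0.0] * n, [0] * n
    for r in range(n):
        fit_max, i_max = -math.inf, 0
        for i in range(r + 1):           # last block = events i..r
            width = edges[r + 1] - edges[i]
            cnt = r - i + 1
            fit = cnt * (math.log(cnt) - math.log(width)) - ncp_prior
            fit += best[i - 1] if i > 0 else 0.0
            if fit > fit_max:
                fit_max, i_max = fit, i
        best[r], last[r] = fit_max, i_max
    # Walk back through the stored change points.
    cps, r = [], n - 1
    while True:
        cps.append(last[r])
        if last[r] == 0:
            break
        r = last[r] - 1
    return [edges[i] for i in reversed(cps)] + [edges[-1]]
```

On data with a rate change (a sparse run followed by a dense cluster), the returned edges place one change point near the rate transition and leave uniform stretches as single blocks.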
Real-space representation of electron correlation in π-conjugated systems
Wang, Jian E-mail: e.j.baerends@vu.nl; Baerends, Evert Jan E-mail: e.j.baerends@vu.nl
2015-05-28
π-electron conjugation and aromaticity are commonly associated with delocalization and especially high mobility of the π electrons. We investigate if also the electron correlation (pair density) exhibits signatures of the special electronic structure of conjugated systems. To that end the shape and extent of the pair density and derived quantities (exchange-correlation hole, Coulomb hole, and conditional density) are investigated for the prototype systems ethylene, hexatriene, and benzene. The answer is that the effects of π-electron conjugation are hardly discernible in the real space representations of the electron correlation. We find the xc hole to be as localized (confined to atomic or diatomic regions) in conjugated systems as in small molecules. This result is relevant for density functional theory (DFT). The potential of the electron exchange-correlation hole is the largest part of v{sub xc}, the exchange-correlation Kohn-Sham potential. So the extent of the hole directly affects the orbital energies of both occupied and unoccupied Kohn-Sham orbitals and therefore has direct relevance for the excitation spectrum as calculated with time-dependent DFT calculations. The potential of the localized xc hole is comparatively more attractive than the actual hole left behind by an electron excited from a delocalized molecular orbital of a conjugated system.
Knowledge Representation Issues in Semantic Graphs for Relationship Detection
Barthelemy, M; Chow, E; Eliassi-Rad, T
2005-02-02
An important task for Homeland Security is the prediction of threat vulnerabilities, such as through the detection of relationships between seemingly disjoint entities. A structure used for this task is a ''semantic graph'', also known as a ''relational data graph'' or an ''attributed relational graph''. These graphs encode relationships as typed links between a pair of typed nodes. Indeed, semantic graphs are very similar to semantic networks used in AI. The node and link types are related through an ontology graph (also known as a schema). Furthermore, each node has a set of attributes associated with it (e.g., ''age'' may be an attribute of a node of type ''person''). Unfortunately, the selection of types and attributes for both nodes and links depends on human expertise and is somewhat subjective and even arbitrary. This subjectiveness introduces biases into any algorithm that operates on semantic graphs. Here, we raise some knowledge representation issues for semantic graphs and provide some possible solutions using recently developed ideas in the field of complex networks. In particular, we use the concept of transitivity to evaluate the relevance of individual links in the semantic graph for detecting relationships. We also propose new statistical measures for semantic graphs and illustrate these semantic measures on graphs constructed from movies and terrorism data.
A survey on application of representation theory to molecular vibration
Prakasa, Yohenry E-mail: ntan@math.itb.ac.id; Muchtadi-Alamsyah, Intan E-mail: ntan@math.itb.ac.id
2014-03-24
Representation theory is used extensively in many of the physical sciences, as every physical system has a symmetry group G. Various differential equations determine the vibration of a molecule, and the symmetry group of the molecule acts on the space of solutions of these equations. In this paper we use the CH{sub 4} (methane) molecule, which has four hydrogen atoms at the corners of a regular tetrahedron and a carbon atom at the center of the tetrahedron. The four hydrogen atoms in CH{sub 4} are permuted by the action of the symmetry group, and this action fixes the carbon atom. At each of the 5 vertices, we assign three unit vectors, called the standard basis vectors, in the directions of the three edges which are joined to the vertex. The symmetry group G of the molecule permutes the 15 standard basis vectors, so we may regard Q{sup 15} as a G-module. By expressing Q{sup 15} as a direct sum of irreducible G-modules, the problem of finding the normal modes of vibration is reduced to that of computing the eigenvectors of some small matrices.
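The decomposition step the abstract describes, expressing a permutation module as a direct sum of irreducibles, can be illustrated on a smaller example. This Python sketch decomposes the permutation representation of S3 acting on 3 points via character inner products; the CH{sub 4} symmetry group and the 15-dimensional module Q{sup 15} are not reproduced here.

```python
from fractions import Fraction

# Toy example (S3 on 3 points, not the CH4 group): decompose a permutation
# representation into irreducibles via character inner products, the same
# kind of computation the paper performs for Q^15 under the CH4 symmetry.
class_sizes = [1, 3, 2]                  # identity, transpositions, 3-cycles
order = sum(class_sizes)                 # |S3| = 6
irreps = {"trivial": [1, 1, 1], "sign": [1, -1, 1], "standard": [2, 0, -1]}
perm_char = [3, 1, 0]                    # fixed points of each class rep

def multiplicity(chi, psi):
    """<chi, psi> = (1/|G|) sum over classes of |class| * chi * psi."""
    return sum(Fraction(s * a * b, order)
               for s, a, b in zip(class_sizes, chi, psi))

decomp = {name: multiplicity(perm_char, chi) for name, chi in irreps.items()}
```

The result, trivial + standard, is the familiar splitting of the permutation module into the invariant sum-vector and its complement.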
Complex-wide representation of material packaged in 3013 containers
Narlesky, Joshua E.; Peppers, Larry G.; Friday, Gary P.
2009-06-01
The DOE sites packaging plutonium oxide materials according to the Department of Energy 3013 Standard (DOE-STD-3013) are responsible for ensuring that the materials are represented by one or more samples in the Materials Identification and Surveillance (MIS) program. The sites categorized most of the materials into process groups, and the remaining materials were characterized based on the prompt gamma analysis results. The sites issued documents to identify the relationships between the materials packaged in 3013 containers and representative materials in the MIS program. These “Represented” documents were then reviewed and concurred with by the MIS Working Group. However, these documents were developed uniquely at each site and were issued before completion of sample characterization, small-scale experiments, and prompt gamma analysis, which provided more detailed information about the chemical impurities and the behavior of the material in storage. Therefore, based on the most recent data, relationships between the materials packaged in 3013 containers and representative materials in the MIS program have been revised. With the prompt gamma analysis completed for Hanford, Rocky Flats, and Savannah River Site 3013 containers, MIS items have been assigned to the 3013 containers for which representation is based on the prompt gamma analysis results. With the revised relationships and the prompt gamma analysis results, a Master “Represented” table has been compiled to document the linkages between each 3013 container packaged to date and its representative MIS items. This table provides an important link between the Integrated Surveillance Program database, which contains information about each 3013 container, and the MIS items database, which contains the characterization, prompt gamma data, and storage behavior data from shelf-life experiments for the representative MIS items.
Baykara, N. A.; Guervit, Ercan; Demiralp, Metin
2012-12-10
In this work a study on finite dimensional matrix approximations to products of quantum mechanical operators is conducted. It is emphasized that the matrix representation of the product of two operators is equal to the product of the matrix representations of each of the operators when all the fluctuation terms are ignored. The calculation of the matrix elements corresponding to the matrix representations of various operators, based on a three-term recursive relation, is described. Finally it is shown that the approximation quality depends on the choice of n, the dimension of the Hilbert space, improving for higher values.
Notes on power of normality tests of error terms in regression models
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary for inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
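The RT class of robust tests is not specified in this abstract. As a concrete baseline for what a classical (non-robust) normality test of regression error terms looks like, here is a Jarque-Bera statistic computed from residuals:

```python
def jarque_bera(residuals):
    """Classical (non-robust) normality test statistic for residuals:
    JB = n/6 * (S^2 + K^2/4), where S is the sample skewness and K the
    excess kurtosis.  Under normality JB is ~ chi-squared with 2 df."""
    n = len(residuals)
    mean = sum(residuals) / n
    m2 = sum((r - mean) ** 2 for r in residuals) / n
    m3 = sum((r - mean) ** 3 for r in residuals) / n
    m4 = sum((r - mean) ** 4 for r in residuals) / n
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0
    return n / 6.0 * (skew ** 2 + excess_kurt ** 2 / 4.0)
```

Moment-based statistics like this one are sensitive to outliers, which is exactly the weakness that motivates the robust variants the contribution studies.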
Online Support Vector Regression with Varying Parameters for Time-Dependent Data
Omitaomu, Olufemi A; Jeong, Myong K; Badiru, Adedeji B
2011-01-01
Support vector regression (SVR) is a machine learning technique that continues to receive interest in several domains, including manufacturing, engineering, and medicine. In order to extend its application to problems in which datasets arrive constantly and in which batch processing of the datasets is infeasible or expensive, an accurate online support vector regression (AOSVR) technique was proposed. The AOSVR technique efficiently updates a trained SVR function whenever a sample is added to or removed from the training set, without retraining on the entire training data. However, the AOSVR technique assumes that the new samples and the training samples have the same characteristics; hence, the same values of the SVR parameters are used for training and prediction. This assumption is not applicable to data samples that are inherently noisy and non-stationary, such as sensor data. As a result, we propose Accurate Online Support Vector Regression with Varying Parameters (AOSVR-VP), which uses varying rather than fixed SVR parameters and hence accounts for the variability that may exist in the samples. To accomplish this objective, we also propose a generalized weight function to automatically update the weights of the SVR parameters in online monitoring applications. The proposed function allows for lower and upper bounds on the SVR parameters. We tested our proposed approach and compared results with the conventional AOSVR approach using two benchmark time series datasets and sensor data from a nuclear power plant. The results show that using varying SVR parameters is more applicable to time-dependent data.
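The exact AOSVR-VP bookkeeping is not reproduced here. As a much simpler stand-in for the idea of re-choosing a regression parameter as samples arrive, this Python sketch maintains a sliding-window kernel ridge regressor whose regularization is re-estimated, within lower and upper bounds, from recent residual variability; the class, the crude noise proxy, and the bounds are illustrative assumptions, not the authors' method.

```python
import numpy as np

def rbf(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of row vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class OnlineKernelRegressor:
    """Sliding-window kernel ridge regression whose regularization is
    re-chosen from recent sample variability each time a point arrives --
    a simplified stand-in for AOSVR-VP's varying parameters."""
    def __init__(self, window=50, gamma=1.0, lam_lo=1e-3, lam_hi=10.0):
        self.window, self.gamma = window, gamma
        self.lam_lo, self.lam_hi = lam_lo, lam_hi
        self.X, self.y = [], []

    def _lam(self):
        if len(self.y) < 3:
            return self.lam_lo
        resid = np.diff(np.asarray(self.y))      # crude local noise proxy
        return float(np.clip(np.var(resid), self.lam_lo, self.lam_hi))

    def add(self, x, y):
        self.X.append(x); self.y.append(y)
        self.X, self.y = self.X[-self.window:], self.y[-self.window:]
        X = np.asarray(self.X)
        K = rbf(X, X, self.gamma)
        lam = self._lam()                        # the "varying parameter"
        self.alpha = np.linalg.solve(K + lam * np.eye(len(X)),
                                     np.asarray(self.y))

    def predict(self, x):
        k = rbf(np.asarray([x]), np.asarray(self.X), self.gamma)[0]
        return float(k @ self.alpha)
```

True AOSVR avoids the full re-solve in `add` with incremental updates of the support-vector sets; this sketch only shows the parameter-variation idea.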
U.S. Department of Energy (DOE) all webpages (Extended Search)
/01/2012 Page 1 of 10 Waste Isolation Pilot Plant Carlsbad, New Mexico REPRESENTATIONS, CERTIFICATIONS, AND NOTICES APPLICABLE TO OFFERS IN EXCESS OF $25,000 Seller's authorized signature is required in the space provided at the bottom of this page. The representations and certifications shall apply based on the dollar value of this offer and the specific solicitation provisions and instructions contained in this request for proposal. Section Page 1. Taxpayer Identification 2 2. Previous
Quantization maps, algebra representation, and non-commutative Fourier transform for Lie groups
Guedes, Carlos; Oriti, Daniele; Raasakka, Matti; LIPN, Institut Galilée, Université Paris-Nord, 99, av. Clément, 93430 Villetaneuse
2013-08-15
The phase space given by the cotangent bundle of a Lie group appears in the context of several models for physical systems. A representation for the quantum system in terms of non-commutative functions on the (dual) Lie algebra, and a generalized notion of (non-commutative) Fourier transform, different from standard harmonic analysis, has been recently developed, and found several applications, especially in the quantum gravity literature. We show that this algebra representation can be defined on the sole basis of a quantization map of the classical Poisson algebra, and identify the conditions for its existence. In particular, the corresponding non-commutative star-product carried by this representation is obtained directly from the quantization map via deformation quantization. We then clarify under which conditions a unitary intertwiner between such algebra representation and the usual group representation can be constructed giving rise to the non-commutative plane waves and consequently, the non-commutative Fourier transform. The compact groups U(1) and SU(2) are considered for different choices of quantization maps, such as the symmetric and the Duflo map, and we exhibit the corresponding star-products, algebra representations, and non-commutative plane waves.
Christensen, N.C.; Emery, J.D.; Smith, M.L.
1985-04-29
A system converts from the boundary representation of an object to the constructive solid geometry representation thereof. The system converts the boundary representation of the object into elemental atomic geometrical units or I-bodies which are in the shape of stock primitives or regularized intersections of stock primitives. These elemental atomic geometrical units are then represented in symbolic form. The symbolic representations of the elemental atomic geometrical units are then assembled heuristically to form a constructive solid geometry representation of the object usable for manufacturing thereof. Artificial intelligence is used to determine the best constructive solid geometry representation from the boundary representation of the object. Heuristic criteria are adapted to the manufacturing environment for which the device is to be utilized. The surface finish, tolerance, and other information associated with each surface of the boundary representation of the object are mapped onto the constructive solid geometry representation of the object to produce an enhanced solid geometry representation, particularly useful for computer-aided manufacture of the object. 19 figs.
Christensen, Noel C.; Emery, James D.; Smith, Maurice L.
1988-04-05
A system converts from the boundary representation of an object to the constructive solid geometry representation thereof. The system converts the boundary representation of the object into elemental atomic geometrical units or I-bodies which are in the shape of stock primitives or regularized intersections of stock primitives. These elemental atomic geometrical units are then represented in symbolic form. The symbolic representations of the elemental atomic geometrical units are then assembled heuristically to form a constructive solid geometry representation of the object usable for manufacturing thereof. Artificial intelligence is used to determine the best constructive solid geometry representation from the boundary representation of the object. Heuristic criteria are adapted to the manufacturing environment for which the device is to be utilized. The surface finish, tolerance, and other information associated with each surface of the boundary representation of the object are mapped onto the constructive solid geometry representation of the object to produce an enhanced solid geometry representation, particularly useful for computer-aided manufacture of the object.
Gupta, N
2008-04-22
3013 containers are designed in accordance with DOE-STD-3013-2004. These containers are qualified to store plutonium (Pu) bearing materials, such as PuO{sub 2}, for 50 years. DOT shipping packages such as the 9975 are used to store the 3013 containers in the K-Area Material Storage (KAMS) facility at Savannah River Site (SRS). DOE-STD-3013-2004 requires that a comprehensive surveillance program be set up to ensure that the 3013 container design parameters are not violated during the long term storage. To ensure structural integrity of the 3013 containers, thermal analyses using finite element models were performed to predict the contents and component temperatures for different but well-defined parameters such as storage ambient temperature, PuO{sub 2} density, fill heights, weights, and thermal loading. Interpolation is normally used to calculate temperatures if the actual parameter values are different from the analyzed values. A statistical analysis technique using regression methods is proposed to develop simple polynomial relations to predict temperatures for the actual parameter values found in the containers. The analysis shows that regression analysis is a powerful tool for developing simple relations to assess component temperatures.
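The report's actual polynomial relations and data are not given in this abstract. The sketch below only illustrates the approach with made-up numbers: fit a least-squares polynomial to temperatures from a few analyzed cases, then evaluate it at an off-design parameter value instead of interpolating by hand.

```python
import numpy as np

# Hedged illustration (values are invented, not the report's data): relate
# one storage parameter (thermal loading, W) to a component temperature.
power = np.array([5.0, 10.0, 15.0, 19.0])   # analyzed thermal loads
temp = np.array([45.0, 62.0, 81.0, 98.0])   # FE-predicted temperatures (C)

coeffs = np.polyfit(power, temp, deg=2)     # least-squares quadratic
t_at_12w = float(np.polyval(coeffs, 12.0))  # an actual container's load
```

The report's regression covers several parameters at once (ambient temperature, density, fill height, and so on); a multivariate least-squares fit follows the same pattern with a design matrix instead of `polyfit`.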
Induced representations of tensors and spinors of any rank in the Stueckelberg-Horwitz-Piron theory
Horwitz, Lawrence P.; Zeilig-Hess, Meir
2015-09-15
We show that a modification of Wigner’s induced representation for the description of a relativistic particle with spin can be used to construct spinors and tensors of arbitrary rank, with invariant decomposition over angular momentum. In particular, scalar and vector fields, as well as the representations of their transformations, are constructed. The method that is developed here admits the construction of wave packets and states of a many body relativistic system with definite total angular momentum. Furthermore, a Pauli-Lubanski operator is constructed on the orbit of the induced representation which provides a Casimir operator for the Poincaré group and which contains the physical intrinsic angular momentum of the particle covariantly.
Using Focused Regression for Accurate Time-Constrained Scaling of Scientific Applications
Barnes, B; Garren, J; Lowenthal, D; Reeves, J; de Supinski, B; Schulz, M; Rountree, B
2010-01-28
Many large-scale clusters now have hundreds of thousands of processors, and processor counts will be over one million within a few years. Computational scientists must scale their applications to exploit these new clusters. Time-constrained scaling, which is often used, tries to hold total execution time constant while increasing the problem size along with the processor count. However, complex interactions between parameters, the processor count, and execution time complicate determining the input parameters that achieve this goal. In this paper we develop a novel gray-box, focused regression-based approach that assists the computational scientist with maintaining constant run time on increasing processor counts. Combining application-level information from a small set of training runs, our approach allows prediction of the input parameters that result in similar per-processor execution time at larger scales. Our experimental validation across seven applications showed that median prediction errors are less than 13%.
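The focused-regression method itself is not detailed in this abstract. The following Python sketch only illustrates the inversion idea with a plain log-log least-squares model of execution time; the power-law form and the function names are assumptions, not the paper's model.

```python
import numpy as np

# Hedged sketch: fit log(time) against log(problem size) and log(processor
# count) on small training runs, then invert the fitted model for the
# problem size that keeps execution time constant at a larger scale.
def fit_scaling(n, p, t):
    A = np.column_stack([np.ones(len(t)), np.log(n), np.log(p)])
    coef, *_ = np.linalg.lstsq(A, np.log(t), rcond=None)
    return coef                              # [intercept, size exp, proc exp]

def size_for_constant_time(coef, t_target, p_new):
    a0, b, c = coef
    return float(np.exp((np.log(t_target) - a0 - c * np.log(p_new)) / b))
```

Given training runs that follow t = 2 n^1.5 / p, doubling the processor count at constant time implies growing the problem size by a factor of 2^(1/1.5).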
STANDARDIZING TYPE Ia SUPERNOVA ABSOLUTE MAGNITUDES USING GAUSSIAN PROCESS DATA REGRESSION
Kim, A. G.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Nordin, J.; Thomas, R. C.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J.; Baltay, C.; Buton, C.; Kerschhaggl, M.; Kowalski, M.; Chotard, N.; Copin, Y.; Gangler, E.; and others
2013-04-01
We present a novel class of models for Type Ia supernova time-evolving spectral energy distributions (SEDs) and absolute magnitudes: they are each modeled as stochastic functions described by Gaussian processes. The values of the SED and absolute magnitudes are defined through well-defined regression prescriptions, so that data directly inform the models. As a proof of concept, we implement a model for synthetic photometry built from the spectrophotometric time series from the Nearby Supernova Factory. Absolute magnitudes at peak B brightness are calibrated to 0.13 mag in the g band and to as low as 0.09 mag in the z = 0.25 blueshifted i band, where the dispersion includes contributions from measurement uncertainties and peculiar velocities. The methodology can be applied to spectrophotometric time series of supernovae that span a range of redshifts to simultaneously standardize supernovae together with fitting cosmological parameters.
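As background for the stochastic-function modeling described above, here is a minimal Gaussian-process regression sketch with a squared-exponential kernel; it is a generic textbook ingredient, not the Nearby Supernova Factory pipeline, and the hyperparameter values are illustrative.

```python
import numpy as np

# Minimal GP regression: posterior mean and variance at test inputs Xs
# given noisy training data (X, y), squared-exponential kernel.
def gp_predict(X, y, Xs, length=1.0, sigma_f=1.0, sigma_n=0.1):
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + sigma_n ** 2 * np.eye(len(X))   # regularized Gram matrix
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)             # posterior mean
    var = sigma_f ** 2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var
```

In the paper's setting the regression runs over phase and wavelength of the SED surface; the same mean/variance formulas apply with a two-dimensional input kernel.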
Harlim, John; Mahdi, Adam; Majda, Andrew J.
2014-01-15
A central issue in contemporary science is the development of nonlinear data-driven statistical-dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics-constrained nonlinear regression models was developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, and the model and observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east-west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three-dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skewed non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
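The full algorithm estimates state, model coefficients, and noise covariances jointly; the toy sketch below shows only the basic ensemble Kalman analysis step on a scalar linear model, with perturbed observations. All numbers are illustrative, not from the paper.

```python
import numpy as np

# Toy ensemble Kalman filter (scalar linear forecast model) illustrating
# the analysis step that the paper's joint state/parameter filter builds on.
def enkf_step(ensemble, obs, obs_noise, rng):
    """Nudge each member toward a perturbed observation using the
    ensemble-estimated Kalman gain."""
    var = ensemble.var(ddof=1)
    gain = var / (var + obs_noise ** 2)
    perturbed = obs + obs_noise * rng.standard_normal(len(ensemble))
    return ensemble + gain * (perturbed - ensemble)

rng = np.random.default_rng(0)
ens = rng.standard_normal(500) * 2.0 + 5.0   # prior ensemble: mean 5, sd 2
for _ in range(20):
    ens = 0.9 * ens + 0.1                    # forecast model, fixed point 1.0
    ens = enkf_step(ens, obs=1.0, obs_noise=0.5, rng=rng)
```

After repeated cycles the ensemble mean settles near the observed fixed point while the spread contracts; the paper's method additionally augments the state with unknown model coefficients so the same update estimates them too.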
Matrix elements for type 1 unitary irreducible representations of the Lie superalgebra gl(m|n)
Gould, Mark D.; Isaac, Phillip S.; Werry, Jason L.
2014-01-15
Using our recent results on eigenvalues of invariants associated to the Lie superalgebra gl(m|n), we use characteristic identities to derive explicit matrix element formulae for all gl(m|n) generators, particularly non-elementary generators, on finite dimensional type 1 unitary irreducible representations. We compare our results with existing works that deal with only subsets of the class of type 1 unitary representations, all of which only present explicit matrix elements for elementary generators. Our work therefore provides an important extension to existing methods, and thus highlights the strength of our techniques which exploit the characteristic identities.
Real-space quadrature: A convenient, efficient representation for multipole expansions
Rogers, David M.
2015-02-21
Multipoles are central to the theory and modeling of polarizable and nonpolarizable molecular electrostatics. This has made a representation in terms of point charges a highly sought after goal, since rotation of multipoles is a bottleneck in molecular dynamics implementations. All known point charge representations are orders of magnitude less efficient than spherical harmonics due to either using too many fixed charge locations or due to nonlinear fitting of fewer charge locations. We present the first complete solution to this problem—completely replacing spherical harmonic basis functions by a dramatically simpler set of weights associated to fixed, discrete points on a sphere. This representation is shown to be space optimal. It reduces the spherical harmonic decomposition of Poisson’s operator to pairwise summations over the point set. As a corollary, we also show exact quadrature-based formulas for contraction over trace-free supersymmetric 3D tensors. Moreover, multiplication of spherical harmonic basis functions translates to a direct product in this representation.
Timing of representation on key panels not great | OSTI, US Dept of Energy
Office of Scientific and Technical Information (OSTI)
Timing of representation on key panels not great (Knoxville News-Sentinel). Tennessee has a U.S. senator and member of Congress on the committees that control federal spending, but the timing of that tag team arrangement is not so good. ...12/22
Mandel, Kaisey S.; Kirshner, Robert P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Foley, Ryan J., E-mail: kmandel@cfa.harvard.edu [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States)
2014-12-20
We investigate the statistical dependence of the peak intrinsic colors of Type Ia supernovae (SNe Ia) on their expansion velocities at maximum light, measured from the Si II λ6355 spectral feature. We construct a new hierarchical Bayesian regression model, accounting for the random effects of intrinsic scatter, measurement error, and reddening by host galaxy dust, and implement a Gibbs sampler and deviance information criteria to estimate the correlation. The method is applied to the apparent colors from BVRI light curves and Si II velocity data for 79 nearby SNe Ia. The apparent color distributions of high-velocity (HV) and normal velocity (NV) supernovae exhibit significant discrepancies for B-V and B-R, but not other colors. Hence, they are likely due to intrinsic color differences originating in the B band, rather than dust reddening. The mean intrinsic B-V and B-R color differences between HV and NV groups are 0.06 ± 0.02 and 0.09 ± 0.02 mag, respectively. A linear model finds significant slopes of 0.021 ± 0.006 and 0.030 ± 0.009 mag (10{sup 3} km s{sup -1}){sup -1} for intrinsic B-V and B-R colors versus velocity, respectively. Because the ejecta velocity distribution is skewed toward high velocities, these effects imply non-Gaussian intrinsic color distributions with skewness up to +0.3. Accounting for the intrinsic-color-velocity correlation results in corrections to A{sub V} extinction estimates as large as -0.12 mag for HV SNe Ia and +0.06 mag for NV events. Velocity measurements from SN Ia spectra have the potential to diminish systematic errors from the confounding of intrinsic colors and dust reddening affecting supernova distances.
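The core machinery here, a Bayesian linear regression estimated with a Gibbs sampler, can be illustrated on a stripped-down version of the problem: a linear model of color versus velocity with conjugate conditional draws for the coefficients and the intrinsic scatter. The sketch below uses synthetic data and omits the dust and measurement-error terms the actual hierarchical model includes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "color vs. velocity" data: y = a + b*v + intrinsic scatter
n, a_true, b_true, sigma_true = 400, 1.0, 2.0, 0.5
v = rng.normal(size=n)
y = a_true + b_true * v + rng.normal(scale=sigma_true, size=n)
X = np.column_stack([np.ones(n), v])

# Gibbs sampler: beta | sigma2 is Gaussian, sigma2 | beta is inverse-gamma
tau2 = 100.0                          # broad Gaussian prior on coefficients
beta, sigma2 = np.zeros(2), 1.0
draws = []
for it in range(2000):
    # Conditional for beta: precision = X'X/sigma2 + I/tau2
    prec = X.T @ X / sigma2 + np.eye(2) / tau2
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y / sigma2)
    beta = rng.multivariate_normal(mean, cov)
    # Conditional for sigma2: inverse-gamma(n/2, RSS/2)
    rss = np.sum((y - X @ beta) ** 2)
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / rss)
    if it >= 500:                     # discard burn-in
        draws.append(beta)

slope_post = np.mean([b for _, b in draws])   # posterior mean of the slope
```

Alternating these two exact conditional draws is the whole Gibbs recipe; the hierarchical model in the paper adds further conditionals for the dust and measurement-error layers.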
Functional Wigner representation of quantum dynamics of Bose-Einstein condensate
Opanchuk, B.; Drummond, P. D.
2013-04-15
We develop a method of simulating the full quantum field dynamics of multi-mode multi-component Bose-Einstein condensates in a trap. We use the truncated Wigner representation to obtain a probabilistic theory that can be sampled. This method produces c-number stochastic equations which may be solved using conventional stochastic methods. The technique is valid for large mode occupation numbers. We give a detailed derivation of methods of functional Wigner representation appropriate for quantum fields. Our approach describes spatial evolution of spinor components and properly accounts for nonlinear losses. Such techniques are applicable to calculating the leading quantum corrections, including effects such as quantum squeezing, entanglement, EPR correlations, and interactions with engineered nonlinear reservoirs. By using a consistent expansion in the inverse density, we are able to explain an inconsistency in the nonlinear loss equations found by earlier authors.
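The truncated Wigner recipe described above (sample the initial Wigner distribution, evolve c-number trajectories, average symmetrically ordered moments) can be sketched for a single Kerr-nonlinear mode, a hypothetical toy far simpler than the multi-mode spinor fields treated in the paper. Vacuum noise of variance 1/2 is added to the coherent amplitude, and ⟨a†a⟩ is recovered as the Wigner average of |α|² minus 1/2.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha0 = 2.0          # coherent amplitude, so n0 = |alpha0|^2 = 4
chi, t = 0.1, 1.0     # Kerr nonlinearity and evolution time (toy values)
n_traj = 40000

# Sample the initial Wigner distribution of a coherent state:
# alpha = alpha0 + delta with <|delta|^2> = 1/2 (half a quantum of vacuum noise)
noise = (rng.normal(size=n_traj) + 1j * rng.normal(size=n_traj)) / 2.0
alpha = alpha0 + noise

# Exact solution of the c-number (truncated Wigner) Kerr equation:
# each trajectory's phase rotates at a rate set by its own |alpha|^2
alpha_t = alpha * np.exp(-2j * chi * (np.abs(alpha) ** 2 - 1.0) * t)

# Symmetric ordering: <a^dag a> = <|alpha|^2>_Wigner - 1/2
n_est = np.mean(np.abs(alpha_t) ** 2) - 0.5
```

Photon number is conserved by the Kerr evolution, so the sampled estimate should recover n0 = 4 to within Monte Carlo error, while the trajectory phases spread out, which is how the method captures phase diffusion.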
Unitary irreducible representations of SL(2,C) in discrete and continuous SU(1,1) bases
Conrady, Florian; Hnybida, Jeff
2011-01-15
We derive the matrix elements of generators of unitary irreducible representations of SL(2,C) with respect to basis states arising from a decomposition into irreducible representations of SU(1,1). This is done with regard to a discrete basis diagonalized by J{sup 3} and a continuous basis diagonalized by K{sup 1}, and for both the discrete and continuous series of SU(1,1). For completeness, we also treat the more conventional SU(2) decomposition as a fifth case. The derivation proceeds in a functional/differential framework and exploits the fact that state functions and differential operators have a similar structure in all five cases. The states are defined explicitly and related to SU(1,1) and SU(2) matrix elements.
Integration of MHD load models with circuit representations of the Z generator.
Jennings, Christopher A.; Ampleford, David J.; Jones, Brent Manley; McBride, Ryan D.; Bailey, James E.; Jones, Michael C.; Gomez, Matthew Robert.; Cuneo, Michael Edward; Nakhleh, Charles; Stygar, William A.; Savage, Mark Edward; Wagoner, Timothy C.; Moore, James K.
2013-03-01
MHD models of imploding loads fielded on the Z accelerator are typically driven by reduced or simplified circuit representations of the generator. The performance of many of the imploding loads is critically dependent on the current and power delivered to them, and so may be strongly influenced by the generator's response to their implosion. Current losses diagnosed in the transmission lines approaching the load are further known to limit the energy delivery, while exhibiting some load dependence. By comparing the convolute performance of a wide variety of short-pulse Z loads, we parameterize a convolute loss resistance applicable across different experiments. We incorporate this and other current loss terms into a transmission line representation of the Z vacuum section. We then apply this model to study the current delivery to a wide variety of wire array and MagLIF-style liner loads.
Cohen, Scott M.
2014-06-15
We give a sufficient condition that an operator sum representation of a separable quantum channel in terms of product operators is the unique product representation for that channel, and then provide examples of such channels for any number of parties. This result has implications for efforts to determine whether or not a given separable channel can be exactly implemented by local operations and classical communication. By the Choi-Jamiolkowski isomorphism, it also translates to a condition for the uniqueness of product state ensembles representing a given quantum state. These ideas follow from considerations concerning whether or not a subspace spanned by a given set of product operators contains at least one additional product operator.
Wigner functions for noncommutative quantum mechanics: A group representation based construction
Chowdhury, S. Hasibul Hassan; Ali, S. Twareque
2015-12-15
This paper is devoted to the construction and analysis of the Wigner functions for noncommutative quantum mechanics, their marginal distributions, and star-products, following a technique developed earlier, viz., using the unitary irreducible representations of the group G{sub NC}, which is the threefold central extension of the Abelian group ℝ{sup 4}. These representations have been exhaustively studied in earlier papers. The group G{sub NC} is identified with the kinematical symmetry group of noncommutative quantum mechanics of a system with two degrees of freedom. The Wigner functions studied here reflect different levels of non-commutativity: both the operators of position and those of momentum not commuting, only the position operators not commuting, and finally the case of standard quantum mechanics, obeying the canonical commutation relations only.
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed. A database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric is computed for each element at each level of detail that is independent of parameters defining the view space. The multiresolution database and associated strict error metrics are then processed in real time for real time frame representations. View parameters for a view volume comprising a view location and field of view are selected. The error metric with the view parameters is converted to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation. First elements are selected from the initial representation data set that are at least partially within the view volume. The first elements are placed in a split queue ordered by the value of the view-dependent error metric. A determination is made whether the number of first elements in the queue meets or exceeds a predetermined number of elements, or whether the largest error metric is less than or equal to a selected upper error metric bound; if not, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting is continued until the determination is positive, to form a first multiresolution set of elements. The first multiresolution set of elements is then outputted as reduced resolution view space data representing the terrain features.
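The split-queue refinement loop described above can be sketched with a priority queue: repeatedly force split the element with the largest view-dependent error until either the element budget is reached or the worst error drops below the bound. The sketch below uses hypothetical errors that simply halve on each split; it captures the control flow only, not the terrain geometry.

```python
import heapq

def refine(initial_errors, max_elems, err_bound):
    """Split-queue refinement: repeatedly force split the worst element.

    Each split replaces one element by two children whose view-dependent
    error is assumed (for this toy) to be half the parent's.
    """
    # heapq is a min-heap, so store negated errors to pop the largest first
    heap = [(-e, i) for i, e in enumerate(initial_errors)]
    heapq.heapify(heap)
    next_id = len(initial_errors)
    while len(heap) < max_elems and -heap[0][0] > err_bound:
        neg_err, _ = heapq.heappop(heap)     # element with the largest error
        for _ in range(2):                   # force split into two children
            heapq.heappush(heap, (neg_err / 2.0, next_id))
            next_id += 1
    return sorted(-e for e, _ in heap)

errors = refine([8.0], max_elems=64, err_bound=1.0)
# refinement stops once every surviving element meets the error bound
```

With one root element of error 8 and ample budget, the loop produces eight leaf elements of error 1.0, exactly at the bound; tightening the budget instead would stop refinement early with larger residual errors.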
Zavahir, J.M.; Arrillaga, J.; Watson, N.R.
1993-07-01
The two alternative methods in current use for the transient simulation of HVdc power systems are Electromagnetic Transient Programs and State Variable Analysis. A hybrid algorithm is described in this paper which combines the two methods, selecting their best features. The relative performances of conventional and hybrid algorithms are discussed. Simulation results for a typical back-to-back HVdc link show that the hybrid representation provides more stable, accurate, and efficient solutions.
Representation of the Solar Capacity Value in the ReEDS Capacity Expansion Model: Preprint
U.S. Department of Energy (DOE) all webpages (Extended Search)
Representation of the Solar Capacity Value in the ReEDS Capacity Expansion Model Preprint Ben Sigrin, Patrick Sullivan, Eduardo Ibanez, and Robert Margolis Presented at the 40th IEEE Photovoltaic Specialists Conference (PVSC-40) Denver, Colorado June 8-13, 2014 Conference Paper NREL/CP-6A20-62015 August 2014
Madduri, Kamesh; Bader, David A.
2009-02-15
Graph-theoretic abstractions are extensively used to analyze massive data sets. Temporal data streams from socioeconomic interactions, social networking web sites, communication traffic, and scientific computing can be intuitively modeled as graphs. We present the first study of novel high-performance combinatorial techniques for analyzing large-scale information networks, encapsulating dynamic interaction data on the order of billions of entities. We present new data structures to represent dynamic interaction networks, and discuss algorithms for processing parallel insertions and deletions of edges in small-world networks. With these new approaches, we achieve an average performance rate of 25 million structural updates per second and a parallel speedup of nearly 28 on a 64-way Sun UltraSPARC T2 multicore processor, for insertions and deletions to a small-world network of 33.5 million vertices and 268 million edges. We also design parallel implementations of fundamental dynamic graph kernels related to connectivity and centrality queries. Our implementations are freely distributed as part of the open-source SNAP (Small-world Network Analysis and Partitioning) complex network analysis framework.
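The dynamic-graph kernels described above can be sketched, in serial Python rather than the authors' parallel implementation in SNAP, as an adjacency structure supporting edge insertion, edge deletion, and a connectivity query:

```python
from collections import deque

class DynamicGraph:
    """Adjacency-set graph supporting dynamic edge updates (serial sketch)."""

    def __init__(self):
        self.adj = {}

    def insert_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def delete_edge(self, u, v):
        self.adj.get(u, set()).discard(v)
        self.adj.get(v, set()).discard(u)

    def connected(self, u, v):
        """Breadth-first search connectivity query."""
        if u == v:
            return True
        seen, frontier = {u}, deque([u])
        while frontier:
            node = frontier.popleft()
            for nbr in self.adj.get(node, ()):
                if nbr == v:
                    return True
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append(nbr)
        return False

g = DynamicGraph()
for edge in [(0, 1), (1, 2), (2, 3)]:
    g.insert_edge(*edge)
was_connected = g.connected(0, 3)    # path 0-1-2-3 exists
g.delete_edge(1, 2)
still_connected = g.connected(0, 3)  # bridge removed, components split
```

The parallel versions in the paper batch many such insertions and deletions and partition the adjacency sets across threads; the data-structure invariants are the same.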
Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)
2006-10-01
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
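The sampling-based propagation strategy can be sketched for a scalar model: each focal element (an interval carrying a basic probability assignment) is sampled, its image under the model bounds an output interval, and belief and plausibility of an output set accumulate the masses of images that are contained in, or merely intersect, that set. The intervals, masses, and model below are illustrative assumptions, not the analysis situations of the paper.

```python
import numpy as np

def propagate(focal_elements, model, out_lo, out_hi, n_samples=201):
    """Sampling-based propagation of an evidence-theory input structure.

    focal_elements: list of ((lo, hi), mass) pairs for the uncertain input.
    Returns (belief, plausibility) of the event model(x) in [out_lo, out_hi].
    """
    bel = pl = 0.0
    for (lo, hi), mass in focal_elements:
        ys = model(np.linspace(lo, hi, n_samples))   # sample the image
        y_lo, y_hi = ys.min(), ys.max()
        if out_lo <= y_lo and y_hi <= out_hi:        # image contained in set
            bel += mass
        if y_hi >= out_lo and y_lo <= out_hi:        # image intersects set
            pl += mass
    return bel, pl

# Two focal elements for x; question: does x**2 land in [0, 4]?
focal = [((0.0, 1.0), 0.5), ((1.0, 3.0), 0.5)]
bel, pl = propagate(focal, lambda x: x**2, 0.0, 4.0)
# first image [0, 1] is contained; second image [1, 9] only intersects
```

The gap between belief (0.5) and plausibility (1.0) is exactly the less restrictive uncertainty specification the abstract refers to; a probabilistic analysis would collapse it to a single number.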
Impact of aerosol size representation on modeling aerosol-cloud interactions
Zhang, Y.; Easter, R. C.; Ghan, S. J.; Abdul-Razzak, H.
2002-11-07
In this study, we use a 1-D version of a climate-aerosol-chemistry model with both modal and sectional aerosol size representations to evaluate the impact of aerosol size representation on modeling aerosol-cloud interactions in shallow stratiform clouds observed during the 2nd Aerosol Characterization Experiment. Both the modal (with prognostic aerosol number and mass or prognostic aerosol number, surface area and mass, referred to as the Modal-NM and Modal-NSM) and the sectional approaches (with 12 and 36 sections) predict total number and mass for interstitial and activated particles that are generally within several percent of references from a high resolution 108-section approach. The modal approach with prognostic aerosol mass but diagnostic number (referred to as the Modal-M) cannot accurately predict the total particle number and surface areas, with deviations from the references ranging from 7-161%. The particle size distributions are sensitive to size representations, with normalized absolute differences of up to 12% and 37% for the 36- and 12-section approaches, and 30%, 39%, and 179% for the Modal-NSM, Modal-NM, and Modal-M, respectively. For the Modal-NSM and Modal-NM, differences from the references are primarily due to the inherent assumptions and limitations of the modal approach. In particular, they cannot resolve the abrupt size transition between the interstitial and activated aerosol fractions. For the 12- and 36-section approaches, differences are largely due to limitations of the parameterized activation for non-log-normal size distributions, plus the coarse resolution for the 12-section case. Differences are larger both with higher aerosol (i.e., less complete activation) and higher SO2 concentrations (i.e., greater modification of the initial aerosol distribution).
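The modal/sectional distinction can be made concrete by discretizing a single lognormal mode into sections and checking how well the sections recover the analytic total number and mass. The sketch below uses an illustrative mode (the values of N0, Dg, and σg are assumptions) and a midpoint rule in ln D.

```python
import numpy as np

# One lognormal aerosol mode (illustrative parameters)
N0, Dg, sigg = 1000.0, 0.1, 2.0   # number conc., geometric mean diameter, geo. std. dev.
lnsg = np.log(sigg)

def dNdlnD(D):
    """Lognormal number distribution dN/dlnD."""
    return N0 / (np.sqrt(2 * np.pi) * lnsg) * np.exp(
        -np.log(D / Dg) ** 2 / (2 * lnsg ** 2))

# Sectional representation: 36 sections spanning +-6 geometric std. devs.
n_sec = 36
edges = np.exp(np.linspace(np.log(Dg) - 6 * lnsg, np.log(Dg) + 6 * lnsg, n_sec + 1))
mids = np.sqrt(edges[:-1] * edges[1:])        # geometric midpoints of sections
widths = np.diff(np.log(edges))               # section widths in ln D

sec_number = np.sum(dNdlnD(mids) * widths)
sec_mass = np.sum(dNdlnD(mids) * mids ** 3 * widths)   # mass ~ D^3, const. density

# Analytic references: total number N0; third moment N0*Dg^3*exp(4.5*ln^2 sigg)
ref_number = N0
ref_mass = N0 * Dg ** 3 * np.exp(4.5 * lnsg ** 2)
num_err = abs(sec_number - ref_number) / ref_number
mass_err = abs(sec_mass - ref_mass) / ref_mass
```

With 36 sections both moments are recovered to well under a percent; cutting to 12 sections, or imposing a sharp activation cut inside one section, degrades the higher moments first, which mirrors the sensitivities reported in the study.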
Li, Dongsheng; Khaleel, Mohammad A.; Sun, Xin; Garmestani, Hamid
2010-03-01
Statistical correlation functions, including the two-point function, are among the popular methods for digitizing microstructure quantitatively. This paper investigates how to represent statistical correlations using a layered fast spherical harmonics expansion. A set of spherical harmonics coefficients may be used to represent the corresponding microstructures. The method is applied to carbon nanotube composite microstructures to demonstrate how efficiently and precisely the harmonic coefficients characterize the microstructure. This microstructure representation methodology will dramatically improve computational efficiency for future work in microstructure reconstruction and property prediction.
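The two-point correlation of a digitized (binary) microstructure is routinely computed via FFT autocorrelation, which makes a convenient numerical reference when testing compact representations like the spherical-harmonic one proposed here. A minimal periodic 2-D sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# Binary microstructure: 1 = phase of interest, 0 = matrix (periodic domain)
img = (rng.random((64, 64)) < 0.3).astype(float)

# Two-point correlation S2(r) = <f(x) f(x+r)> via the Wiener-Khinchin theorem
F = np.fft.fft2(img)
S2 = np.fft.ifft2(F * np.conj(F)).real / img.size

phi = img.mean()   # volume fraction of the phase
# S2 at zero separation equals the volume fraction for a binary field,
# and decays toward phi**2 at large separations for an uncorrelated field
```

For this uncorrelated toy field S2 drops from φ at r = 0 toward φ² immediately; a real microstructure's decay length carries the structural information that the harmonic coefficients compress.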
Approach of spherical harmonics to the representation of the deformed su(1,1) algebra
Fakhri, H.; Ghaneh, T.
2008-11-15
The m-shifting generators of su(2) algebra together with a pair of l-shifting ladder symmetry operators have been used in the space of all spherical harmonics Y{sub l}{sup m}({theta},{phi}) in order to introduce a new set of operators, expressing the transitions between them. It is shown that the space of spherical harmonics whose l+2m or l-2m is given presents negative and positive irreducible representations of a deformed su(1,1) algebra, respectively. These internal symmetries also suggest new algebraic methods to construct the spherical harmonics in the framework of the spectrum-generating algebras.
Augustine, C.
2011-10-01
The U.S. Department of Energy (DOE) Geothermal Technologies Program (GTP) tasked the National Renewable Energy Laboratory (NREL) with conducting the annual geothermal supply curve update. This report documents the approach taken to identify geothermal resources, determine the electrical producing potential of these resources, and estimate the levelized cost of electricity (LCOE), capital costs, and operating and maintenance costs from these geothermal resources at present and future timeframes under various GTP funding levels. Finally, this report discusses the resulting supply curve representation and how improvements can be made to future supply curve updates.
Oliveira, Joseph S.; Jones-Oliveira, Janet B.; Bailey, Colin G.; Gull, Dean W.
2008-07-01
One embodiment of the present invention includes a computer operable to represent a physical system with a graphical data structure corresponding to a matroid. The graphical data structure corresponds to a number of vertices and a number of edges that each correspond to two of the vertices. The computer is further operable to define a closed pathway arrangement with the graphical data structure and identify each different one of a number of fundamental cycles by evaluating a different respective one of the edges with a spanning tree representation. The fundamental cycles each include three or more of the vertices.
Light-front representation of chiral dynamics in peripheral transverse densities
Granados, Carlos G.; Weiss, Christian
2015-07-31
The nucleon's electromagnetic form factors are expressed in terms of the transverse densities of charge and magnetization at fixed light-front time. At peripheral transverse distances b = O(M_pi^{-1}) the densities are governed by chiral dynamics and can be calculated model-independently using chiral effective field theory (EFT). We represent the leading-order chiral EFT results for the peripheral transverse densities as overlap integrals of chiral light-front wave functions, describing the transition of the initial nucleon to soft pion-nucleon intermediate states and back. The new representation (a) explains the parametric order of the peripheral transverse densities; (b) establishes an inequality between the spin-independent and spin-dependent densities; (c) exposes the role of pion orbital angular momentum in chiral dynamics; (d) reveals a large left-right asymmetry of the current in a transversely polarized nucleon and suggests a simple interpretation. The light-front representation enables a first-quantized, quantum-mechanical view of chiral dynamics that is fully relativistic and exactly equivalent to the second-quantized, field-theoretical formulation. It relates the charge and magnetization densities measured in low-energy elastic scattering to the generalized parton distributions probed in peripheral high-energy scattering processes. The method can be applied to nucleon form factors of other operators, e.g. the energy-momentum tensor.
Bryan, Frank; Dennis, John; MacCready, Parker; Whitney, Michael
2015-11-20
This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability, (used in objective 1) within POP, the ocean component of CESM.
Kowalski, Karol; Bhaskaran-Nair, Kiran; Shelton, William A.
2014-09-07
In this paper we discuss a new formalism for producing an analytic coupled-cluster (CC) Green's function, a highly scalable and accurate computational method for an N-electron system obtained by shifting the poles of similarity transformed Hamiltonians represented in N-1 and N+1 electron Hilbert spaces. Simple criteria are derived for the states in N-1 and N+1 electron spaces that are then corrected in the spectral resolution of the corresponding matrix representations of the similarity transformed Hamiltonian. The accurate description of excited state processes within a Green's function formalism would be of significant importance to a number of scientific communities ranging from physics and chemistry to engineering and the biological sciences. This is because the Green's function methodology provides a direct path not only for calculating properties whose underlying origins come from coupled many-body interactions but also for calculating electron transport, response, and correlation functions that allows for a direct link with experiment. As a special case of this general formulation, we discuss the application of this technique to the Green's function defined by the CCSD (CC with singles and doubles) representation of the ground-state wave function.
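The pole structure central to this formalism can be illustrated directly: given the poles ε_n and residues of a Green's function, the spectral function A(ω) = −Im G(ω + iη)/π is a sum of Lorentzians obeying a sum rule. The poles and residues below are illustrative numbers, not CCSD output.

```python
import numpy as np

# Illustrative poles (removal/attachment energies) and residues (spectral weights)
poles = np.array([-1.5, -0.7, 0.4])
residues = np.array([0.6, 0.3, 0.1])     # weights sum to 1

eta = 0.05                               # small broadening above the real axis
omega = np.linspace(-60.0, 60.0, 120001)

# G(w) = sum_n r_n / (w - e_n + i*eta);  A(w) = -Im G / pi
G = np.sum(residues[:, None] / (omega[None, :] - poles[:, None] + 1j * eta), axis=0)
A = -G.imag / np.pi

# Sum rule: integral of A over frequency approaches the total spectral weight
sum_rule = np.sum(A) * (omega[1] - omega[0])
```

Each Lorentzian peak sits at a pole with area equal to its residue; shifting the poles, as the formalism above does in the N-1 and N+1 sectors, moves the peaks of A(ω) without violating the sum rule.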
Zhang, P; Hu, J; Tyagi, N; Mageras, G; Lee, N; Hunt, M
2014-06-01
Purpose: To develop a robust planning paradigm which incorporates a tumor regression model into the optimization process to ensure tumor coverage in head and neck radiotherapy. Methods: Simulation and weekly MR images were acquired for a group of head and neck patients to characterize tumor regression during radiotherapy. For each patient, the tumor and parotid glands were segmented on the MR images and the weekly changes were formulated with an affine transformation, where morphological shrinkage and positional changes are modeled by a scaling factor, and centroid shifts, respectively. The tumor and parotid contours were also transferred to the planning CT via rigid registration. To perform the robust planning, weekly predicted PTV and parotid structures were created by transforming the corresponding simulation structures according to the weekly affine transformation matrix averaged over patients other than him/herself. Next, robust PTV and parotid structures were generated as the union of the simulation and weekly prediction contours. In the subsequent robust optimization process, attainment of the clinical dose objectives was required for the robust PTV and parotids, as well as other organs at risk (OAR). The resulting robust plans were evaluated by looking at the weekly and total accumulated dose to the actual weekly PTV and parotid structures. The robust plan was compared with the original plan based on the planning CT to determine its potential clinical benefit. Results: For four patients, the average weekly change to tumor volume and position was -4% and 1.2 mm laterally-posteriorly. Due to these temporal changes, the robust plans resulted in an accumulated PTV D95 that was, on average, 2.7 Gy higher than the plan created from the planning CT. OAR doses were similar. Conclusion: Integration of a tumor regression model into target delineation and plan robust optimization is feasible and may yield improved tumor coverage. Part of this research is supported by
Grider, Gary A.; Poole, Stephen W.
2015-09-01
Collective buffering and data pattern solutions are provided for storage, retrieval, and/or analysis of data in a collective parallel processing environment. For example, a method can be provided for data storage in a collective parallel processing environment. The method comprises receiving data to be written for a plurality of collective processes within a collective parallel processing environment, extracting a data pattern for the data to be written for the plurality of collective processes, generating a representation describing the data pattern, and saving the data and the representation.
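The pattern-extraction step can be sketched as recognizing a regular stride in the offsets written by the collective processes and replacing the explicit list with a compact (start, stride, count) descriptor. This is a common representation in parallel I/O; the exact encoding here is an assumption for illustration.

```python
def extract_pattern(offsets):
    """Return a compact (start, stride, count) descriptor if the write
    offsets form a regular strided pattern, else None."""
    if len(offsets) < 2:
        return None
    stride = offsets[1] - offsets[0]
    if any(b - a != stride for a, b in zip(offsets, offsets[1:])):
        return None
    return (offsets[0], stride, len(offsets))

def expand_pattern(descriptor):
    """Reconstruct the explicit offset list from a descriptor."""
    start, stride, count = descriptor
    return [start + i * stride for i in range(count)]

offsets = [4096 * i for i in range(8)]     # e.g. one fixed-size block per process
desc = extract_pattern(offsets)            # compact descriptor replaces the list
roundtrip = expand_pattern(desc)
irregular = extract_pattern([0, 10, 15])   # no single stride fits: falls back to None
```

Saving the descriptor alongside the buffered data, rather than per-process offset lists, is what makes the representation cheap to store and to replay on retrieval.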
Scale and the representation of human agency in the modeling of agroecosystems
Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.; Absar, Syeda Mariya; Nair, Sujithkumar Surendran; Parish, Esther S.
2015-07-17
Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration for how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Finally, such approaches should be accompanied by greater recognition of the meta-agency of model users and the need for more critical evaluation of model selection and application.
A diabatic representation of the two lowest electronic states of Li{sub 3}
Ghassemi, Elham Nour; Larson, Jonas; Institut für Theoretische Physik, Universität zu Köln, Köln De-50937 ; Larson, Åsa
2014-04-21
Using the Multi-Reference Configuration Interaction method, the adiabatic potential energy surfaces of Li{sub 3} are computed. The two lowest electronic states are bound and exhibit a conical intersection. By fitting the calculated potential energy surfaces to the cubic E ⊗ ε Jahn-Teller model we extract the effective Jahn-Teller parameters corresponding to Li{sub 3}. These are used to set up the transformation matrix which transforms from the adiabatic to a diabatic representation. This diabatization method gives a Hamiltonian for Li{sub 3} which is free from singular non-adiabatic couplings and should be accurate for large internuclear distances, and it thereby allows for bound dynamics in the vicinity of the conical intersection to be explored.
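For the linear E ⊗ e Jahn-Teller problem, the adiabatic-to-diabatic transformation is a rotation by half the pseudorotation angle, which is the essence of the transformation-matrix construction described above. The sketch below uses the linear model with an assumed coupling constant, not the cubic fit of the paper.

```python
import numpy as np

k = 1.0                 # linear Jahn-Teller coupling constant (assumed)
x, y = 0.6, 0.8         # nuclear displacement coordinates around the intersection
rho, phi = np.hypot(x, y), np.arctan2(y, x)

# Diabatic potential matrix of the linear E (x) e Jahn-Teller model:
# smooth in (x, y), with no singular derivative couplings
W = k * np.array([[x,  y],
                  [y, -x]])

# Adiabatic-to-diabatic transformation: rotation by half the angle phi
theta = phi / 2.0
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# U^T W U is diagonal, with the adiabatic cone energies +-k*rho
adiabatic = U.T @ W @ U
```

The half-angle in U is also the source of the geometric phase: following a loop around the conical intersection advances φ by 2π but θ by only π, flipping the sign of the adiabatic states.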
Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang
2014-04-15
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT images.
Sharon Falcone Miller; Bruce G. Miller
2007-12-15
This paper compares the emissions factors for a suite of liquid biofuels (three animal fats, waste restaurant grease, pressed soybean oil, and a biodiesel produced from soybean oil) and four fossil fuels (i.e., natural gas, No. 2 fuel oil, No. 6 fuel oil, and pulverized coal) in Penn State's commercial water-tube boiler to assess their viability as fuels for green heat applications. The data were broken into two subsets, i.e., fossil fuels and biofuels. The regression model for the liquid biofuels (as a subset) did not perform well for all of the gases. In addition, the coefficient in the models showed the EPA method underestimating CO and NOx emissions. No relation could be studied for SO{sub 2} for the liquid biofuels as they contain no sulfur; however, the model showed a good relationship between the two methods for SO{sub 2} in the fossil fuels. AP-42 emissions factors for the fossil fuels were also compared to the mass balance emissions factors and EPA CFR Title 40 emissions factors. Overall, the AP-42 emissions factors for the fossil fuels did not compare well with the mass balance emissions factors or the EPA CFR Title 40 emissions factors. Regression analysis of the AP-42, EPA, and mass balance emissions factors for the fossil fuels showed a significant relationship only for CO{sub 2} and SO{sub 2}. However, the regression models underestimate the SO{sub 2} emissions by 33%. These tests illustrate the importance in performing material balances around boilers to obtain the most accurate emissions levels, especially when dealing with biofuels. The EPA emissions factors were very good at predicting the mass balance emissions factors for the fossil fuels and to a lesser degree the biofuels. While the AP-42 emissions factors and EPA CFR Title 40 emissions factors are easier to perform, especially in large, full-scale systems, this study illustrated the shortcomings of estimation techniques. 23 refs., 3 figs., 8 tabs.
Office of Scientific and Technical Information (OSTI)
QUANTIZATION; THEORETICAL DATA: The geometric picture of the star product based on its Fourier representation kernel is utilized in the evaluation of chains of star products and...
Division, Argonne National Laboratory, Argonne, Illinois 60439...
Boué, Gwenaël; Fabrycky, Daniel C.
2014-07-10
The non-resonant secular dynamics of compact planetary systems are modeled by a perturbing function that is usually expanded in eccentricity and absolute inclination with respect to the invariant plane. Here, the expressions are given in a vectorial form which naturally leads to an expansion in eccentricity and mutual inclination. The two approaches are equivalent in most cases, but the vectorial one is specially designed for those cases where an entire quasi-coplanar system tilts to a large degree. Moreover, the vectorial expressions of the Hamiltonian and of the equations of motion are slightly simpler than those given in terms of the usual elliptical elements. We also provide the secular perturbing function in vectorial form expanded in semi-major axis ratio allowing for arbitrary eccentricities and inclinations. The interaction between the equatorial bulge of a central star and its planets is also provided, as is the relativistic periapse precession of any planet induced by the central star. We illustrate the use of this representation to follow the secular oscillations of the terrestrial planets of the solar system and for Kozai cycles which may take place in exoplanetary systems.
A new subgrid-scale representation of hydrometeor fields using a multivariate PDF
Griffin, Brian M.; Larson, Vincent E.
2016-06-03
The subgrid-scale representation of hydrometeor fields is important for calculating microphysical process rates. In order to represent subgrid-scale variability, the Cloud Layers Unified By Binormals (CLUBB) parameterization uses a multivariate probability density function (PDF). In addition to vertical velocity, temperature, and moisture fields, the PDF includes hydrometeor fields. Previously, hydrometeor fields were assumed to follow a multivariate single lognormal distribution. Now, in order to better represent the distribution of hydrometeors, two new multivariate PDFs are formulated and introduced. The new PDFs represent hydrometeors using either a delta-lognormal or a delta-double-lognormal shape. The two new PDF distributions, plus the previous single lognormal shape, are compared to histograms of data taken from large-eddy simulations (LESs) of a precipitating cumulus case, a drizzling stratocumulus case, and a deep convective case. Finally, the warm microphysical process rates produced by the different hydrometeor PDFs are compared to the same process rates produced by the LES.
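The delta-lognormal idea described above (a point mass at zero for the precipitation-free portion of a grid box, plus a lognormal for the precipitating portion) can be illustrated with a small sampling routine. This is a generic sketch; the function and parameter names are invented for illustration and are not CLUBB's actual API:

```python
import numpy as np

def sample_delta_lognormal(n, f_precip, mu, sigma, rng=None):
    """Draw n values from a delta-lognormal mixture: zero with
    probability 1 - f_precip (the delta at zero, i.e. the portion of
    the grid box with no hydrometeors), lognormal(mu, sigma) otherwise.
    """
    rng = np.random.default_rng(rng)
    active = rng.random(n) < f_precip      # which samples precipitate
    x = np.zeros(n)
    x[active] = rng.lognormal(mu, sigma, active.sum())
    return x

# 30% of the grid box precipitating, illustrative lognormal parameters
samples = sample_delta_lognormal(10000, 0.3, 0.0, 1.0, rng=0)
```

A delta-double-lognormal shape would replace the single lognormal draw with a two-component lognormal mixture for the active samples.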
Scale and the representation of human agency in the modeling of agroecosystems
Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.; Absar, Syeda Mariya; Nair, Sujithkumar Surendran; Parish, Esther S.
2015-07-17
Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration of how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Such approaches, however, should be accompanied by greater recognition of the meta-agency of model users and the need for more critical evaluation of model selection and application.
Formulating a simplified equivalent representation of distribution circuits for PV impact studies.
Reno, Matthew J.; Broderick, Robert Joseph; Grijalva, Santiago
2013-04-01
With an increasing number of Distributed Generation (DG) being connected on the distribution system, a method for simplifying the complexity of the distribution system to an equivalent representation of the feeder is advantageous for streamlining the interconnection study process. The general characteristics of the system can be retained while reducing the modeling effort required. This report presents a method of simplifying feeders to only specified buses-of-interest. These buses-of-interest can be potential PV interconnection locations or buses where engineers want to verify a certain power quality. The equations and methodology are presented with mathematical proofs of the equivalence of the circuit reduction method. An example 15-bus feeder is shown with the parameters and intermediate example reduction steps to simplify the circuit to 4 buses. The reduced feeder is simulated using PowerWorld Simulator to validate that those buses operate with the same characteristics as the original circuit. Validation of the method is also performed for snapshot and time-series simulations with variable load and solar energy output data to validate the equivalent performance of the reduced circuit with the interconnection of PV.
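The bus-elimination idea described above can be illustrated with a standard Kron reduction of a nodal admittance matrix, which removes non-retained buses while preserving the network behavior seen at the buses-of-interest. This is a generic sketch of that textbook technique, not the report's exact derivation or its 15-bus example:

```python
import numpy as np

def kron_reduce(Y, keep):
    """Reduce a nodal admittance matrix Y to the buses in `keep`,
    eliminating all other (zero-injection) buses via the Schur
    complement: Y_red = Y_kk - Y_ke @ inv(Y_ee) @ Y_ek.
    """
    n = Y.shape[0]
    elim = [i for i in range(n) if i not in keep]
    Ykk = Y[np.ix_(keep, keep)]
    Yke = Y[np.ix_(keep, elim)]
    Yek = Y[np.ix_(elim, keep)]
    Yee = Y[np.ix_(elim, elim)]
    return Ykk - Yke @ np.linalg.solve(Yee, Yek)

# Toy 4-bus network (Laplacian-like admittance matrix, made up for
# illustration), reduced to buses-of-interest 0 and 3
Y = np.array([[ 2.0, -1.0,  0.0, -1.0],
              [-1.0,  3.0, -2.0,  0.0],
              [ 0.0, -2.0,  3.0, -1.0],
              [-1.0,  0.0, -1.0,  2.0]])
Y_red = kron_reduce(Y, [0, 3])
```

The reduced matrix reproduces the voltage-current relationship at the retained buses exactly for any injections at those buses, which is why snapshot and time-series results at the buses-of-interest match the full circuit.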
Jahandideh, Sepideh; Jahandideh, Samad; Asadabadi, Ebrahim Barzegari; Askarian, Mehrdad; Movahedi, Mohammad Mehdi; Hosseini, Somayyeh; Jahandideh, Mina
2009-11-15
Prediction of the amount of hospital waste production will be helpful in the storage, transportation, and disposal aspects of hospital waste management. On this basis, two predictor models, artificial neural networks (ANNs) and multiple linear regression (MLR), were applied to predict the rate of medical waste generation in total and by type (sharp, infectious, and general). In this study, a 5-fold cross-validation procedure on a database containing a total of 50 hospitals of Fars province (Iran) was used to verify the performance of the models. Three performance measures, MAR, RMSE, and R{sup 2}, were used to evaluate the performance of the models. The MLR as a conventional model obtained poor prediction performance measure values. However, MLR distinguished hospital capacity and bed occupancy as the more significant parameters. On the other hand, ANNs, a more powerful model which had not previously been applied to predicting the rate of medical waste generation, showed high performance measure values, especially an R{sup 2} value of 0.99 confirming the good fit of the data. Such satisfactory results could be attributed to the non-linear nature of ANNs in problem solving, which provides the opportunity for relating independent variables to dependent ones non-linearly. In conclusion, the obtained results showed that our ANN-based model approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in the future.
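The MLR half of such a comparison is straightforward to sketch. Below is a minimal 5-fold cross-validated ordinary-least-squares model on synthetic stand-in data; the predictor names and coefficients are invented for illustration and are not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 50 "hospitals" with bed count and occupancy rate
# (the study's two significant predictors); the waste relation and its
# coefficients below are made up for illustration.
beds = rng.uniform(50, 500, 50)
occupancy = rng.uniform(0.4, 0.95, 50)
waste = 0.8 * beds * occupancy + 5.0 + rng.normal(0, 10, 50)

X = np.column_stack([np.ones(50), beds, occupancy])  # design matrix

def kfold_r2(X, y, k=5):
    """Average out-of-fold R^2 for ordinary least squares, k-fold CV."""
    idx = np.arange(len(y))
    scores = []
    for test in np.array_split(idx, k):
        train = np.setdiff1d(idx, test)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        pred = X[test] @ beta
        ss_res = np.sum((y[test] - pred) ** 2)
        ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
        scores.append(1 - ss_res / ss_tot)
    return np.mean(scores)

r2 = kfold_r2(X, waste)
```

An ANN would replace the linear `X @ beta` predictor with a nonlinear mapping, which is what lets it capture interactions (such as the beds-occupancy product above) that MLR misses.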
Zambrano, Eduardo; Šulc, Miroslav; Vaníček, Jiří
2013-08-07
Time-resolved electronic spectra can be obtained as the Fourier transform of a special type of time correlation function known as fidelity amplitude, which, in turn, can be evaluated approximately and efficiently with the dephasing representation. Here we improve both the accuracy of this approximation, with an amplitude correction derived from the phase-space propagator, and its efficiency, with an improved cellular scheme employing the inverse Weierstrass transform and optimal scaling of the cell size. We demonstrate the advantages of the new methodology by computing dispersed time-resolved stimulated emission spectra in the harmonic potential, pyrazine, and the NCO molecule. In contrast, we show that in strongly chaotic systems such as the quartic oscillator the original dephasing representation is more appropriate than either the cellular or prefactor-corrected methods.
Benioff, Paul
2009-01-01
This work is based on the field of reference frames based on quantum representations of real and complex numbers described in other work. Here frame domains are expanded to include space and time lattices. Strings of qukits are described as hybrid systems as they are both mathematical and physical systems. As mathematical systems they represent numbers. As physical systems in each frame the strings have a discrete Schrodinger dynamics on the lattices. The frame field has an iterative structure such that the contents of a stage j frame have images in a stage j - 1 (parent) frame. A discussion of parent frame images includes the proposal that points of stage j frame lattices have images as hybrid systems in parent frames. The resulting association of energy with images of lattice point locations, as hybrid systems states, is discussed. Representations and images of other physical systems in the different frames are also described.
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
THMC Modeling of EGS Reservoirs - Continuum through Discontinuum Representations: Capturing Reservoir Stimulation, Evolution and Induced Seismicity. Derek Elsworth, Pennsylvania State University. Chemistry, Reservoir and Integrated Models. Project Officer: Lauren Boyd. Total Project Funding: $1.11M + $0.5M = $1.61M. April 23, 2013. This presentation does not contain any proprietary, confidential, or otherwise restricted information. Challenges * Prospecting
Next Generation Models for Storage and Representation of Microbial Biological Annotation
Quest, Daniel J; Land, Miriam L; Brettin, Thomas S; Cottingham, Robert W
2010-01-01
Background Traditional genome annotation systems were developed in a very different computing era, one where the World Wide Web was just emerging. Consequently, these systems are built as centralized black boxes focused on generating high quality annotation submissions to GenBank/EMBL supported by expert manual curation. The exponential growth of sequence data drives a growing need for increasingly higher quality and automatically generated annotation. Typical annotation pipelines utilize traditional database technologies, clustered computing resources, Perl, C, and UNIX file systems to process raw sequence data, identify genes, and predict and categorize gene function. These technologies tightly couple the annotation software system to hardware and third party software (e.g. relational database systems and schemas). This makes annotation systems hard to reproduce, inflexible to modification over time, difficult to assess, difficult to partition across multiple geographic sites, and difficult to understand for those who are not domain experts. These systems are not readily open to scrutiny and therefore not scientifically tractable. The advent of Semantic Web standards such as Resource Description Framework (RDF) and OWL Web Ontology Language (OWL) enables us to construct systems that address these challenges in a new comprehensive way. Results Here, we develop a framework for linking traditional data to OWL-based ontologies in genome annotation. We show how data standards can decouple hardware and third party software tools from annotation pipelines, thereby making annotation pipelines easier to reproduce and assess. An illustrative example shows how TURTLE (Terse RDF Triple Language) can be used as a human readable, but also semantically-aware, equivalent to GenBank/EMBL files. Conclusions The power of this approach lies in its ability to assemble annotation data from multiple databases across multiple locations into a representation that is understandable to
Response and representation of ductile damage under varying shock loading conditions in tantalum
Bronkhorst, C. A.; Gray, III, G. T.; Addessio, F. L.; Livescu, V.; Bourne, N. K.; MacDonald, S. A.; Withers, P. J.
2016-02-28
The response of polycrystalline metals, which possess adequate mechanisms for plastic deformation under extreme loading conditions, is often accompanied by the formation of pores within the structure of the material. This large deformation process is broadly identified as progressive, with nucleation, growth, coalescence, and failure the physical path taken over very short periods of time. These are well known to be complex processes strongly influenced by microstructure, loading path, and the loading profile, which remain a significant challenge to represent and predict numerically. In the current study, the influence of loading path on the damage evolution in high-purity tantalum is presented. Tantalum samples were shock loaded to three different peak shock stresses using both symmetric impact and two different composite flyer plate configurations such that upon unloading the three samples displayed nearly identical “pull-back” signals as measured via rear-surface velocimetry. While the “pull-back” signals observed were found to be similar in magnitude, the sample loaded to the highest peak stress nucleated a connected field of ductile fracture which resulted in complete separation, while the two lower peak stresses resulted in incipient damage. The damage evolution in the “soft” recovered tantalum samples was quantified using optical metallography, electron backscatter diffraction, and tomography. These experiments are examined numerically through the use of a model for shock-induced porosity evolution during damage. The model is shown to describe the response of the tantalum reasonably well under strongly loaded conditions but less well in the nucleation-dominated regime. Numerical results are also presented as a function of computational mesh density and discussed in the context of improved representation of the influence of material structure upon macro-scale models of ductile damage.
De Sapio, Vincent
2010-09-01
The analysis of spacecraft kinematics and dynamics requires an efficient scheme for spatial representation. While the representation of displacement in three dimensional Euclidean space is straightforward, orientation in three dimensions poses particular challenges. The unit quaternion provides an approach that mitigates many of the problems intrinsic in other representation approaches, including the ill-conditioning that arises from computing many successive rotations. This report focuses on the computational utility of unit quaternions and their application to the reconstruction of re-entry vehicle (RV) motion history from sensor data. To this end they will be used in conjunction with other kinematic and data processing techniques. We will present a numerical implementation for the reconstruction of RV motion solely from gyroscope and accelerometer data. This will make use of unit quaternions due to their numerical efficacy in dealing with the composition of many incremental rotations over a time series. In addition to signal processing and data conditioning procedures, algorithms for numerical quaternion-based integration of gyroscope data will be addressed, as well as accelerometer triangulation and integration to yield RV trajectory. Actual processed flight data will be presented to demonstrate the implementation of these methods.
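The incremental-rotation composition that motivates unit quaternions can be sketched with a generic gyroscope-integration routine. This is an illustrative implementation of the standard technique under assumed sample data, not the report's actual processing pipeline:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(omega_samples, dt):
    """Compose incremental rotations from body-rate samples (rad/s).

    Each step converts omega*dt into a small rotation quaternion and
    composes it with the running attitude; renormalizing every step
    avoids the drift/ill-conditioning that accumulates when composing
    many successive rotation matrices.
    """
    q = np.array([1.0, 0.0, 0.0, 0.0])  # identity attitude
    for omega in omega_samples:
        angle = np.linalg.norm(omega) * dt
        if angle > 0.0:
            axis = omega / np.linalg.norm(omega)
            dq = np.concatenate([[np.cos(angle / 2)],
                                 np.sin(angle / 2) * axis])
            q = quat_mul(q, dq)
            q /= np.linalg.norm(q)  # keep unit length
    return q

# 100 samples of a constant pi/2 rad/s spin about z over 1 s -> 90 deg yaw
omega = [np.array([0.0, 0.0, np.pi / 2])] * 100
q = integrate_gyro(omega, 0.01)
```

For a constant-axis spin the composed quaternion equals the single rotation by the total angle, so the result above is the 90-degree yaw quaternion (cos 45°, 0, 0, sin 45°).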
Aerts, Diederik; Sassoli de Bianchi, Massimiliano
2014-12-15
A generalized Bloch sphere, in which the states of a quantum entity of arbitrary dimension are geometrically represented, is investigated and further extended to also incorporate the measurements. This extended representation constitutes a general solution to the measurement problem, inasmuch as it allows the Born rule to be derived as an average over hidden variables describing not the state of the quantum entity, but its interaction with the measuring system. According to this model, a quantum measurement is to be understood, in general, as a tripartite process, formed by an initial deterministic decoherence-like process, a subsequent indeterministic collapse-like process, and a final deterministic purification-like process. We also show that quantum probabilities can be generally interpreted as the probabilities of a first-order non-classical theory, describing situations of maximal lack of knowledge regarding the process of actualization of potential interactions during a measurement. - Highlights: • An extended Bloch representation of quantum measurements is given. • Quantum measurements are explained in terms of hidden-measurement interactions. • Quantum measurements are explained as tripartite processes. • The Born rule results from a universal average over all possible measurement processes.
Niyogi, Devdutta S.
2013-06-07
The CLASIC experiment was conducted over the US southern great plains (SGP) in June 2007 with the objective of advancing understanding of cumulus convection, particularly as it relates to land surface conditions. The project was designed to assist with improving the representation of land-atmosphere convection initiation, which is important for global and regional models. The study helped address a critical documented deficiency in the models, central to the ARM objectives for cumulus convection initiation, particularly under summertime conditions. The project was guided by a scientific question building on the CLASIC theme questions: what is the effect of improved land surface representation on the ability of coupled models to simulate cumulus and convection initiation? The focus was on the US Southern Great Plains region. Since the CLASIC period was anomalously wet, the strategy was to use other periods and domains to develop a comparative assessment for the CLASIC data period, and to understand the mechanisms of the anomalous wet conditions on the tropical systems and convection over land. The data periods include the IHOP 2002 field experiment, which covered roughly the same domain as CLASIC in the SGP, and some of the DOE-funded Ameriflux datasets.
Tao, Liang; McCurdy, C.W.; Rescigno, T.N.
2008-11-25
We show how to combine finite elements and the discrete variable representation in prolate spheroidal coordinates to develop a grid-based approach for quantum mechanical studies involving diatomic molecular targets. Prolate spheroidal coordinates are a natural choice for diatomic systems and have been used previously in a variety of bound-state applications. The use of exterior complex scaling in the present implementation allows for a transparently simple way of enforcing Coulomb boundary conditions and therefore straightforward application to electronic continuum problems. Illustrative examples involving the bound and continuum states of H2+, as well as the calculation of photoionization cross sections, show that the speed and accuracy of the present approach offer distinct advantages over methods based on single-center expansions.
U.S. Department of Energy (DOE) all webpages (Extended Search)
Joint entry Operating Systems Research 1527 16th NW 5 Washington, DC 20036 USA Trammell ... improvements in data access performance for today's parallel computing applications. ...
Micro Kernel Benchmark for Evaluating Computer Performance
Energy Science and Technology Software Center (OSTI)
2007-04-06
Crystal_mk is a micro benchmark that LLNL will use to evaluate vendor's software(e.g. compiler) and hardware(e.g. processor speed, memory design).
Perturbation kernels for generalized seismological data functionals...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Authors: Chen, Po, Jordan, T.H., Lee, E. In seismic waveform analysis and inversion, data ... The generalized seismological data functionals (GSDF) of Gee & Jordan quantify waveform ...
Robotics - Intelligence Kernel - Energy Innovation Portal
U.S. Department of Energy (DOE) all webpages (Extended Search)
on the defined contour path thus reducing the need for continuous attention by the operator. Benefits - Reduces overlap and/or skipping, - Increases safety, efficiency, accuracy, - ...
PERI Auto-tuning Memory Intensive Kernels
U.S. Department of Energy (DOE) all webpages (Extended Search)
broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. ...
Ceccato, Alessandro; Frezzato, Diego; Nicolini, Paolo
2015-12-14
In this work, we deal with general reactive systems involving N species and M elementary reactions under applicability of the mass-action law. Starting from the dynamic variables introduced in two previous works [P. Nicolini and D. Frezzato, J. Chem. Phys. 138(23), 234101 (2013); 138(23), 234102 (2013)], we turn to a new representation in which the system state is specified in a (N × M){sup 2}-dimensional space by a point whose coordinates have physical dimension of inverse-of-time. By adopting hyper-spherical coordinates (a set of dimensionless “angular” variables and a single “radial” one with physical dimension of inverse-of-time) and by examining the properties of their evolution law both formally and numerically on model kinetic schemes, we show that the system evolves towards the equilibrium as being attracted by a sequence of fixed subspaces (one at a time) each associated with a compact domain of the concentration space. Thus, we point out that also for general non-linear kinetics there exist fixed “objects” on the global scale, although they are conceived in such an abstract and extended space. Moreover, we propose a link between the persistence of the belonging of a trajectory to such subspaces and the closeness to the slow manifold which would be perceived by looking at the bundling of the trajectories in the concentration space.
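The radial/angular split described above follows the standard hyperspherical-coordinate recursion, which can be sketched generically as follows. This is a textbook coordinate transform for illustration only; the paper's actual dynamic variables are not reproduced here:

```python
import numpy as np

def to_hyperspherical(x):
    """Map a point x in R^n to (r, theta_1, ..., theta_{n-1}).

    r carries the physical dimension (inverse time, in the paper's
    setting); the angles are dimensionless. The standard recursion is
    theta_i = atan2(||x[i+1:]||, x[i]), with the last angle taken as
    atan2(x[n-1], x[n-2]) so the sign of the final coordinate survives.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = np.linalg.norm(x)
    thetas = [np.arctan2(np.linalg.norm(x[i + 1:]), x[i])
              for i in range(n - 2)]
    thetas.append(np.arctan2(x[-1], x[-2]))
    return r, np.array(thetas)

def from_hyperspherical(r, thetas):
    """Inverse map: x_i = r * sin(t_1)...sin(t_{i-1}) * cos(t_i)."""
    n = len(thetas) + 1
    x = np.empty(n)
    s = r
    for i, t in enumerate(thetas):
        x[i] = s * np.cos(t)
        s *= np.sin(t)
    x[-1] = s
    return x

# round-trip check on an arbitrary point
x = np.array([3.0, -4.0, 12.0, -2.0])
r, thetas = to_hyperspherical(x)
```

Decoupling the single dimensional radius from the bounded angular variables is what lets fixed "objects" (the attracting subspaces) be described purely in the angular part.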
Wu Yinghua; Herman, Michael F.
2006-10-21
A justification is given for the validity of a nonadiabatic surface hopping Herman-Kluk (HK) semiclassical initial value representation (SC-IVR) method. The method is based on a propagator that combines the single-surface HK SC-IVR method [J. Chem. Phys. 84, 326 (1986)] and Herman's nonadiabatic semiclassical surface hopping theory [J. Chem. Phys. 103, 8081 (1995)], which was originally developed using the primitive semiclassical Van Vleck propagator. We show that the nonadiabatic HK SC-IVR propagator satisfies the time-dependent Schroedinger equation to first order in ℏ, with an error of O(ℏ{sup 2}). As a required lemma, we show that the stationary phase approximation, under the current assumptions, has an error term one order of ℏ higher than the leading term. Our derivation suggests some changes to the previous development, and it is shown that the numerical accuracy in applications to Tully's three model systems at low energies is improved.
Huang, Hsin-Yuan; Hall, Alex
2013-07-24
Stratocumulus and shallow cumulus clouds in subtropical oceanic regions (e.g., Southeast Pacific) cover thousands of square kilometers and play a key role in regulating global climate (e.g., Klein and Hartmann, 1993). Numerical modeling is an essential tool to study these clouds in regional and global systems, but the current generation of climate and weather models has difficulties in representing them in a realistic way (e.g., Siebesma et al., 2004; Stevens et al., 2007; Teixeira et al., 2011). While numerical models resolve the large-scale flow, subgrid-scale parameterizations are needed to estimate small-scale properties (e.g. boundary layer turbulence and convection, clouds, radiation), which have significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. To represent the contribution of these fine-scale processes to the resolved scale, climate models use various parameterizations, which are the main pieces in the model that contribute to the low clouds dynamics and therefore are the major sources of errors or approximations in their representation. In this project, we aim to 1) improve our understanding of the physical processes in thermal circulation and cloud formation, 2) examine the performance and sensitivity of various parameterizations in the regional weather model (Weather Research and Forecasting model; WRF), and 3) develop, implement, and evaluate the advanced boundary layer parameterization in the regional model to better represent stratocumulus, shallow cumulus, and their transition. Thus, this project includes three major corresponding studies. We find that the mean diurnal cycle is sensitive to model domain in ways that reveal the existence of different contributions originating from the Southeast Pacific land-masses. The experiments suggest that diurnal variations in circulations and thermal structures over this region are influenced by convection over the Peruvian sector of the Andes cordillera, while
Tilmes, Simone; Lamarque, Jean-Francois; Emmons, Louisa K.; Kinnison, Doug E.; Marsh, Dan; Garcia, Rolando R.; Smith, Anne K.; Neely, Ryan R.; Conley, Andrew; Vitt, Francis; et al
2016-05-20
The Community Earth System Model (CESM1) CAM4-chem has been used to perform the Chemistry Climate Model Initiative (CCMI) reference and sensitivity simulations. In this model, the Community Atmospheric Model version 4 (CAM4) is fully coupled to tropospheric and stratospheric chemistry. Details and specifics of each configuration, including new developments and improvements, are described. CESM1 CAM4-chem is a low-top model that reaches up to approximately 40 km and uses a horizontal resolution of 1.9° latitude and 2.5° longitude. For the specified dynamics experiments, the model is nudged to Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis. We summarize the performance of the three reference simulations suggested by CCMI, with a focus on the last 15 years of the simulation when most observations are available. Comparisons with selected data sets are employed to demonstrate the general performance of the model. We highlight new data sets that are suited for multi-model evaluation studies. The most important improvements of the model are the treatment of stratospheric aerosols and the corresponding adjustments for radiation and optics, the updated chemistry scheme including improved polar chemistry and stratospheric dynamics, and improved dry deposition rates. These updates lead to a very good representation of tropospheric ozone, within 20% of values from available observations for most regions. In particular, the trend and magnitude of surface ozone is much improved compared to earlier versions of the model. Furthermore, stratospheric column ozone of the Southern Hemisphere in winter and spring is reasonably well represented. In conclusion, all experiments still underestimate CO most significantly in Northern Hemisphere spring and show a significant underestimation of hydrocarbons based on surface observations.
No, H.C.; Kazimi, M.S.
1983-03-01
This work involves the development of physical models for the constitutive relations of a two-fluid, three-dimensional sodium boiling code, THERMIT-6S. The code is equipped with a fluid conduction model, a fuel pin model, and a subassembly wall model suitable for simulating LMFBR transient events. Mathematically rigorous derivations of time-volume averaged conservation equations are used to establish the differential equations of THERMIT-6S. These equations are then discretized in a manner identical to the original THERMIT code. A virtual mass term is incorporated in THERMIT-6S to solve the ill-posed problem. Based on a simplified flow regime, namely cocurrent annular flow, constitutive relations for two-phase flow of sodium are derived. The wall heat transfer coefficient is based on the momentum-heat transfer analogy and a logarithmic law for the liquid film velocity distribution. A broad literature review is given for two-phase friction factors. It is concluded that entrainment can account for some of the discrepancies in the literature. Mass and energy exchanges are modelled by generalization of the turbulent flux concept. Interfacial drag coefficients are derived for annular flows with entrainment. Code assessment is performed by simulating three experiments for low flow/high power accidents and one experiment for low flow/low power accidents in the LMFBR. While the numerical results for pre-dryout are in good agreement with the data, those for post-dryout reveal the need for improvement of the physical models. The benefits of two-dimensional non-equilibrium representation of sodium boiling are studied.
Morrison, Hugh (PI)
2012-09-21
This is the first meeting of the whole new GEWEX (Global Energy and Water Cycle Experiment) Atmospheric System Study (GASS) project that has been formed from the merger of the GEWEX Cloud System Study (GCSS) Project and the GEWEX Atmospheric Boundary Layer Studies (GABLS). As such, this meeting will play a major role in energizing GEWEX work in the area of atmospheric parameterizations of clouds, convection, stable boundary layers, and aerosol-cloud interactions for the numerical models used for weather and climate projections at both global and regional scales. The representation of these processes in models is crucial to GEWEX goals of improved prediction of the energy and water cycles at both weather and climate timescales. This proposal seeks funds to be used to cover incidental and travel expenses for U.S.-based graduate students and early career scientists (i.e., within 5 years of receiving their highest degree). We anticipate using DOE funding to support 5-10 people. We will advertise the availability of these funds by providing a box to check for interested participants on the online workshop registration form. We will also send a note to our participants' mailing lists reminding them that the funds are available and asking senior scientists to encourage their more junior colleagues to participate. All meeting participants are encouraged to submit abstracts for oral or poster presentations. The science organizing committee (see below) will base funding decisions on the relevance and quality of these abstracts, with preference given to under-represented populations (especially women and minorities) and to early career scientists being actively mentored at the meeting (e.g. students or postdocs attending the meeting with their advisor).
White, James M.; Faber, Vance; Saltzman, Jeffrey S.
1992-01-01
An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes which represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete lookup table (LUT) where an 8-bit data signal is enabled to form a display of 24-bit color values. The LUT is formed in a sampling and averaging process from the image color values with no requirement to define discrete Voronoi regions for color compression. Image color values are assigned 8-bit pointers to their closest LUT value whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
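The sample-and-average LUT construction can be sketched as follows. The function and parameter names are hypothetical, and this is only an illustration of the general idea (sample seed colors, assign 8-bit pointers to the nearest entry, average each group), not the patented method itself:

```python
import numpy as np

def build_lut(pixels, n_entries=256, rng=None):
    """Build a small color lookup table by sampling and averaging.

    Sample seed colors from the image, point every pixel at its
    nearest seed (the 8-bit pointer), and average each group to get
    the stored 24-bit LUT entries, with no explicit construction of
    Voronoi regions.
    """
    rng = np.random.default_rng(rng)
    seeds = pixels[rng.choice(len(pixels), n_entries, replace=False)]
    # nearest seed per pixel, by squared Euclidean distance in RGB
    d = ((pixels[:, None, :].astype(float) - seeds[None, :, :]) ** 2).sum(-1)
    pointers = d.argmin(axis=1).astype(np.uint8)  # 8-bit data signal
    lut = np.empty_like(seeds, dtype=float)
    for k in range(n_entries):
        members = pixels[pointers == k]
        lut[k] = members.mean(axis=0) if len(members) else seeds[k]
    return lut.round().astype(np.uint8), pointers

# 1000 random 24-bit pixels stand in for an image
img_rng = np.random.default_rng(1)
pixels = img_rng.integers(0, 256, (1000, 3), dtype=np.uint8)
lut, pointers = build_lut(pixels, rng=0)
```

Display then only needs the 8-bit pointer per pixel; `lut[pointers]` recovers the 24-bit color values.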
Tang, G.; Yuan, F.; Bisht, G.; Hammond, G. E.; Lichtner, P. C.; Kumar, J.; Mills, R. T.; Xu, X.; Andre, B.; Hoffman, F. M.; et al
2015-12-17
tight relative update tolerance. As some biogeochemical processes (e.g., methane and nitrous oxide production and consumption) involve very low half saturation and threshold concentrations, this work provides insights for addressing nonphysical negativity issues and facilitates the representation of a mechanistic biogeochemical description in earth system models to reduce climate prediction uncertainty.
Yang, Aileen; Hoek, Gerard; Montagne, Denise; Leseman, Daan L.A.C.; Hellack, Bryan; Kuhlbusch, Thomas A.J.; Cassee, Flemming R.; Brunekreef, Bert; Janssen, Nicole A.H.
2015-07-15
Oxidative potential (OP) of ambient particulate matter (PM) has been suggested as a health-relevant exposure metric. In order to use OP for exposure assessment, information is needed about how well central site OP measurements and modeled average OP at the home address reflect temporal and spatial variation of personal OP. We collected 96-hour personal, home outdoor and indoor PM2.5 samples from 15 volunteers living either at traffic, urban or regional background locations in Utrecht, the Netherlands. OP was also measured at one central reference site to account for temporal variations. OP was assessed using electron spin resonance (OP^ESR) and dithiothreitol (OP^DTT). Spatial variation of average OP at the home address was modeled using land use regression (LUR) models. For both OP^ESR and OP^DTT, temporal correlations of central site measurements with home outdoor measurements were high (R>0.75), and moderate to high (R=0.49–0.70) with personal measurements. The LUR model predictions for OP correlated significantly with the home outdoor concentrations for OP^DTT and OP^ESR (R=0.65 and 0.62, respectively). LUR model predictions were moderately correlated with personal OP^DTT measurements (R=0.50). Adjustment for indoor sources, such as vacuum cleaning and absence of fume-hood, improved the temporal and spatial agreement with measured personal exposure for OP^ESR. OP^DTT was not associated with any indoor sources. Our study results support the use of central site OP for exposure assessment of epidemiological studies focusing on short-term health effects. - Highlights: • Oxidative potential (OP) of PM was proposed as a health-relevant exposure metric. • We evaluated the relationship between measured and modeled outdoor and personal OP. • Temporal correlations of central site with personal OP are moderate to high. • Adjusting for indoor sources improved the agreement with personal OP. • Our results
Huang, Shao Hui; O'Sullivan, Brian; Ringash, Jolie; Hope, Andrew; Gilbert, Ralph; Irish, Jonathan; Perez-Ordonez, Bayardo; Weinreb, Ilan; Waldron, John
2013-12-01
Purpose: To compare the temporal lymph node (LN) regression and regional control (RC) after primary chemoradiation therapy/radiation therapy in human papillomavirus-related [HPV(+)] versus human papillomavirus-unrelated [HPV(−)] head-and-neck cancer (HNC). Methods and Materials: All cases of N2-N3 HNC treated with radiation therapy/chemoradiation therapy between 2003 and 2009 were reviewed. Human papillomavirus status was ascertained by p16 staining on all available oropharyngeal cancers. Larynx/hypopharynx cancers were considered HPV(−). Initial radiologic complete nodal response (CR) (≤1.0 cm 8-12 weeks after treatment), ultimate LN resolution, and RC were compared between HPV(+) and HPV(−) HNC. Multivariate analysis identified outcome predictors. Results: A total of 257 HPV(+) and 236 HPV(−) HNCs were identified. The initial LN size was larger (mean, 2.9 cm vs 2.5 cm; P<.01) with a higher proportion of cystic LNs (38% vs 6%, P<.01) in HPV(+) versus HPV(−) HNC. CR was achieved in 125 HPV(+) HNCs (49%) and 129 HPV(−) HNCs (55%) (P=.18). The mean posttreatment largest LN was 36% of the original size in the HPV(+) group and 41% in the HPV(−) group (P<.01). The actuarial LN resolution was similar in the HPV(+) and HPV(−) groups at 12 weeks (42% and 43%, respectively), but it was higher in the HPV(+) group than in the HPV(−) group at 36 weeks (90% vs 77%, P<.01). The median follow-up period was 3.6 years. The 3-year RC rate was higher in the HPV(−) CR cases versus non-CR cases (92% vs 63%, P<.01) but was not different in the HPV(+) CR cases versus non-CR cases (98% vs 92%, P=.14). On multivariate analysis, HPV(+) status predicted ultimate LN resolution (odds ratio, 1.4 [95% confidence interval, 1.1-1.7]; P<.01) and RC (hazard ratio, 0.3 [95% confidence interval 0.2-0.6]; P<.01). Conclusions: HPV(+) LNs involute more quickly than HPV(−) LNs but undergo a more prolonged process to eventual CR beyond the time of initial assessment at 8 to 12 weeks after treatment. Post
Jeffcoat, David B.; DePrince, A. Eugene
2014-12-07
Propagating the equations of motion (EOM) for the one-electron reduced-density matrix (1-RDM) requires knowledge of the corresponding two-electron RDM (2-RDM). We show that the indeterminacy of this expression can be removed through a constrained optimization that resembles the variational optimization of the ground-state 2-RDM subject to a set of known N-representability conditions. Electronic excitation energies can then be obtained by propagating the EOM for the 1-RDM and following the dipole moment after the system interacts with an oscillating external electric field. For simple systems with well-separated excited states whose symmetry differs from that of the ground state, excitation energies obtained from this method are comparable to those obtained from full configuration interaction computations. Although the optimized 2-RDM satisfies necessary N-representability conditions, the procedure cannot guarantee a unique mapping from the 1-RDM to the 2-RDM. This deficiency is evident in the mean-field-quality description of transitions to states of the same symmetry as the ground state, as well as in the inability of the method to describe Rabi oscillations.
Fernando, Sudarshan; Günaydin, Murat
2014-11-28
We study the minimal unitary representation (minrep) of SO(5, 2), obtained by quantization of its geometric quasiconformal action, its deformations and supersymmetric extensions. The minrep of SO(5, 2) describes a massless conformal scalar field in five dimensions and admits a unique deformation which describes a massless conformal spinor. Scalar and spinor minreps of SO(5, 2) are the 5d analogs of Dirac's singletons of SO(3, 2). We then construct the minimal unitary representation of the unique 5d superconformal algebra F(4) with the even subalgebra SO(5, 2) ⊕ SU(2). The minrep of F(4) describes a massless conformal supermultiplet consisting of two scalar and one spinor fields. We then extend our results to the construction of higher spin AdS6/CFT5 (super)-algebras. The Joseph ideal of the minrep of SO(5, 2) vanishes identically as operators and hence its enveloping algebra yields the AdS6/CFT5 bosonic higher spin algebra directly. The enveloping algebra of the spinor minrep defines a deformed higher spin algebra for which a deformed Joseph ideal vanishes identically as operators. These results are then extended to the construction of the unique higher spin AdS6/CFT5 superalgebra as the enveloping algebra of the minimal unitary realization of F(4) obtained by the quasiconformal methods.
Mitchell, David L.
2013-09-05
It is well known that cirrus clouds play a major role in regulating the earth’s climate, but the details of how this works are just beginning to be understood. This project targeted the main property of cirrus clouds that influences climate processes: the ice fall speed. That is, this project improves the representation of the mass-weighted ice particle fall velocity, V_{m}, in climate models used to predict future climate on global and regional scales. Prior to 2007, the dominant sizes of ice particles in cirrus clouds were poorly understood, making it virtually impossible to predict how cirrus clouds interact with sunlight and thermal radiation. Due to several studies investigating the performance of optical probes used to measure the ice particle size distribution (PSD), as well as the remote sensing results from our last ARM project, it is now well established that the anomalously high concentrations of small ice crystals often reported prior to 2007 were measurement artifacts. Advances in the design and data processing of optical probes have greatly reduced these ice artifacts that resulted from the shattering of ice particles on the probe tips and/or inlet tube, and PSD measurements from one of these improved probes (the 2-dimensional Stereo or 2D-S probe) are utilized in this project to parameterize V_{m} for climate models. Our original plan in the proposal was to parameterize the ice PSD (in terms of temperature and ice water content) and ice particle mass and projected area (in terms of mass- and area-dimensional power laws or m-D/A-D expressions) since these are the microphysical properties that determine V_{m}, and then proceed to calculate V_{m} from these parameterized properties. But the 2D-S probe directly measures ice particle projected area and indirectly estimates ice particle mass for each size bin. It soon became apparent that the original plan would introduce more uncertainty in the V_{m} calculations
Wagner, A.F.; Schatz, G.C.; Bowman, J.M.
1981-05-01
The DIM surface of Whitlock, Muckerman, and Fisher for the O(³P)+H₂ system is used as a test case to evaluate the usefulness of a variety of fitting functions for the representation of potential energy surfaces. Fitting functions based on LEPS, BEBO, and rotated Morse oscillator (RMO) forms are examined. Fitting procedures are developed for combining information about a small portion of the surface and the fitting function to predict where on the surface more information must be obtained to improve the accuracy of the fit. Both unbiased procedures and procedures heavily biased toward the saddle point region of the surface are investigated. Collinear quasiclassical trajectory calculations of the reaction rate constant and one and three dimensional transition state theory rate constant calculations are performed and compared for selected fits and the exact DIM test surface. Fitting functions based on BEBO and RMO forms are found to give quite accurate results.
Tang, Guoping; Yuan, Fengming; Bisht, Gautam; Hammond, Glenn E.; Lichtner, Peter C.; Collier, Nathaniel O.; Kumar, Jitendra; Mills, Richard T.; Xu, Xiaofeng; Andre, Ben; Hoffman, Forrest M.; Painter, Scott L.; Thornton, Peter E.
2016-01-01
Reactive transport codes (e.g., PFLOTRAN) are increasingly used to improve the representation of biogeochemical processes in terrestrial ecosystem models (e.g., the Community Land Model, CLM). As CLM and PFLOTRAN use explicit and implicit time stepping, implementation of CLM biogeochemical reactions in PFLOTRAN can result in negative concentration, which is not physical and can cause numerical instability and errors. The objective of this work is to address the nonnegativity challenge to obtain accurate, efficient, and robust solutions. We illustrate the implementation of a reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant nitrogen uptake reactions and test the implementation at arctic, temperate, and tropical sites. We examine use of scaling back the update during each iteration (SU), log transformation (LT), and downregulating the reaction rate to account for reactant availability limitation to enforce nonnegativity. Both SU and LT guarantee nonnegativity but with implications. When a very small scaling factor occurs due to either consumption or numerical overshoot, and the iterations are deemed converged because of too small an update, SU can introduce excessive numerical error. LT involves multiplication of the Jacobian matrix by the concentration vector, which increases the condition number, decreases the time step size, and increases the computational cost. Neither SU nor LT prevents zero concentration. When the concentration is close to machine precision or 0, a small positive update stops all reactions for SU, and LT can fail due to a singular Jacobian matrix. The consumption rate has to be downregulated such that the solution to the mathematical representation is positive. A first-order rate downregulates consumption and is nonnegative, and adding a residual concentration makes it positive. For zero-order rate or when the reaction rate is not a function of a reactant, representing the availability limitation of each
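The contrast between a zero-order consumption rate and the first-order downregulation, and the SU scale-back strategy, can be illustrated with a single uncoupled reaction under implicit Euler. This is a minimal sketch only; PFLOTRAN's actual Newton solve handles coupled reaction networks, and the function names here are invented for illustration.

```python
def implicit_euler_zero_order(c0, k, dt, nsteps):
    """Zero-order consumption dC/dt = -k: implicit Euler gives
    C_new = C_old - k*dt, which can overshoot below zero."""
    c = c0
    for _ in range(nsteps):
        c = c - k * dt
    return c

def implicit_euler_first_order(c0, k, dt, nsteps):
    """First-order downregulation dC/dt = -k*C: solving the implicit
    step C_new = C_old - k*dt*C_new gives C_new = C_old/(1 + k*dt),
    which stays nonnegative whenever C_old is."""
    c = c0
    for _ in range(nsteps):
        c = c / (1.0 + k * dt)
    return c

def scale_back(c_old, update, eps=1e-12):
    """SU strategy sketch: shrink a Newton update that would drive the
    concentration negative so that it lands at (approximately) zero."""
    c_new = c_old + update
    if c_new >= 0.0:
        return c_new
    alpha = c_old / (c_old - c_new + eps)  # scaling factor in (0, 1)
    return c_old + alpha * update
```

Note how the scaled-back update stops exactly at zero, which is the degenerate state the abstract warns about: once the concentration hits zero, a first-order rate shuts the reaction off entirely.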
Jakob, Christian
2015-02-26
This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.
Mielke, Steven L; Schwenke, David; Schatz, George C; Garrett, Bruce C; Peterson, Kirk A
2009-04-23
Multireference configuration interaction (MRCI) calculations of the Born-Oppenheimer diagonal correction (BODC) for H_{3} were performed at 1397 symmetry-unique configurations using the Born-Handy approach; isotopic substitution leads to 4041 symmetry-unique configurations for the DH_{2} mass combination. These results were then fit to a functional form that permits calculation of the BODC for any combination of isotopes. Mean unsigned fitting errors on a test grid of configurations not included in the fitting process were 0.14, 0.12, and 0.65 cm^{-1} for the H_{3}, DH_{2}, and MuH_{2} isotopomers, respectively. This representation can be combined with any Born-Oppenheimer potential energy surface (PES) to yield Born-Huang (BH) PESs; herein we choose the CCI potential energy surface, the uncertainties of which (~0.01 kcal/mol) are much smaller than the magnitude of the BODC. FORTRAN routines to evaluate these BH surfaces are provided. Variational transition state theory calculations are presented comparing thermal rate constants for reactions on the BO and BH surfaces to provide an initial estimate of the significance of the diagonal correction for the dynamics.
Interpolations of nuclide-specific scattering kernels generated with Serpent
Scopatz, A.; Schneider, E.
2012-07-01
The neutron group-to-group scattering cross section is an essential input parameter for any multi-energy group physics model. However, if the analyst prefers to use Monte Carlo transport to generate group constants this data is difficult to obtain for a single species of a material. Here, the Monte Carlo code Serpent was modified to return the group transfer probabilities on a per-nuclide basis. This ability is demonstrated in conjunction with an essential physics reactor model where cross section perturbations are used to dynamically generate reactor state dependent group constants via interpolation from pre-computed libraries. The modified version of Serpent was therefore verified with three interpolation cases designed to test the resilience of the interpolation scheme to changes in intra-group fluxes. For most species, interpolation resulted in errors of less than 5% of transport-computed values. For important scatterers, such as ¹H, errors less than 2% were observed. For nuclides with high errors (>10%), the scattering channel typically only had a small probability of occurring. (authors)
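Interpolating group constants between pre-computed library state points might look like the sketch below. This is a hypothetical linear scheme in a single state parameter (e.g., fuel temperature); the paper's libraries and interpolation details may differ, and both function names are invented here.

```python
def interpolate_scatter_matrix(x, x_lo, x_hi, sig_lo, sig_hi):
    """Linearly interpolate a group-to-group scattering matrix between
    two precomputed library state points x_lo and x_hi."""
    t = (x - x_lo) / (x_hi - x_lo)
    return [[(1.0 - t) * lo + t * hi for lo, hi in zip(row_lo, row_hi)]
            for row_lo, row_hi in zip(sig_lo, sig_hi)]

def max_relative_error(approx, reference):
    """Worst-case relative error of an interpolated matrix against a
    transport-computed reference, over nonzero reference entries."""
    return max(abs(a - r) / abs(r)
               for row_a, row_r in zip(approx, reference)
               for a, r in zip(row_a, row_r) if r != 0.0)
```

A verification case like those in the paper would compare `max_relative_error` of the interpolated matrix against a matrix freshly computed by transport at the intermediate state point.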
Simulation Problem Analysis and Research Kernel | Open Energy...
Open Energy Information (OpenEI) [EERE & EIA]
Kernel Integration Code System--Multigroup Gamma-Ray Scattering.
Energy Science and Technology Software Center (OSTI)
1988-02-15
GGG (G3) is the generic designation for a series of computer programs that enable the user to estimate gamma-ray scattering from a point source to a series of point detectors. Program output includes detector response due to each source energy, as well as a grouping by scattered energy in addition to a simple, unscattered beam result. Although G3 is basically a single-scatter program, it also includes a correction for multiple scattering by applying a buildup factor for the path segment between the point of scatter and the detector point. Results are recorded with and without the buildup factor. Surfaces, defined by quadratic equations, are used to provide for a full three-dimensional description of the physical geometry. G3 evaluates scattering effects in those situations where more exact techniques are not economical. G3 was revised by Bettis and the name was changed to indicate that it was no longer identical to the G3 program. The name S3 was chosen since the scattering calculation has three steps: calculation of the flux arriving at the scatterer from the point source, calculation of the differential scattering cross section, and calculation of the scattered flux arriving at the detector.
Anchordoqui, Luis A.; Goldberg, Haim; Huang, Xing; Vlcek, Brian J.
2014-06-17
The tensor-to-scalar ratio (r = 0.20, +0.07/−0.05) inferred from the excess B-mode power observed by the Background Imaging of Cosmic Extragalactic Polarization (BICEP2) experiment is almost twice as large as the 95% CL upper limits derived from temperature measurements of the WMAP (r<0.13) and Planck (r<0.11) space missions. Very recently, it was suggested that additional relativistic degrees of freedom beyond the three active neutrinos and photons can help to relieve this tension: the data favor an effective number of light neutrino species N_eff = 3.86 ± 0.25. Since the BICEP2 ratio implies the energy scale of inflation (V∗^(1/4) ∼ 2×10^16 GeV) is comparable to the grand unification scale, in this paper we investigate whether we can accommodate the required N_eff with three right-handed (partners of the left-handed standard model) neutrinos living in the fundamental representation of a grand unified exceptional E_6 group. We show that the superweak interactions of these Dirac states (through their coupling to a TeV-scale Z′ gauge boson) lead to decoupling of the right-handed neutrinos just above the QCD crossover transition: 175 MeV ≲ T_νR^dec ≲ 250 MeV. For decoupling in this transition region, the contribution of the three right-handed neutrinos to N_eff is suppressed by heating of the left-handed neutrinos (and photons). Consistency (within 1σ) with the favored N_eff is achieved for 4.5 TeV
Generalized REGression Package for Nonlinear Parameter Estimation
Energy Science and Technology Software Center (OSTI)
1995-05-15
GREG computes modal (maximum-posterior-density) and interval estimates of the parameters in a user-provided Fortran subroutine MODEL, using a user-provided vector OBS of single-response observations or matrix OBS of multiresponse observations. GREG can also select the optimal next experiment from a menu of simulated candidates, so as to minimize the volume of the parametric inference region based on the resulting augmented data set.
Xu, T.T.; Sathaye, J.; Galitsky, C.
2010-09-30
Adoption of efficient end-use technologies is one of the key measures for reducing greenhouse gas (GHG) emissions. As energy programs and policies on carbon regulation take effect, effectively analyzing and managing the costs associated with GHG reductions becomes extremely important for industry and policy makers around the world. Energy-climate (EC) models are often used for analyzing the costs of reducing GHG emissions (e.g., carbon emissions) for various emission-reduction measures, because an accurate estimation of these costs is critical for identifying and choosing optimal emission-reduction measures, and for developing related policy options to accelerate market adoption and technology implementation. However, the accuracy of GHG-emission-reduction cost assessments that account for the adoption of energy-efficiency technologies depends on how well these end-use technologies are represented in integrated assessment models (IAMs) and other energy-climate models. In this report, we first conduct a brief overview of different representations of end-use technologies (mitigation measures) in various energy-climate models, followed by problem statements and a description of the basic concepts of quantifying the cost of conserved energy, including integrating non-regrets options. A non-regrets option is defined as a GHG-reduction option that is cost effective without considering its additional benefits related to reducing GHG emissions. Based upon these, we develop information on costs of mitigation measures and technological change. These serve as the basis for collating the data on energy savings and costs for their future use in integrated assessment models. In addition to descriptions of the iron- and steel-making processes and the mitigation measures identified in this study, the report includes tabulated databases on costs of measure implementation, energy savings, carbon-emission reduction, and lifetimes. The cost curve data on mitigation
Babcock, Kerry; Sidhu, Narinder
2010-02-15
Purpose: Due to limitations in computer memory and computation time, typical radiation therapy treatments are calculated with a voxel dimension on the order of several millimeters. The anatomy below this practical resolution is approximated as a homogeneous region uniform in atomic composition and density. The purpose of this article is to examine whether the exclusion of anatomic structure below the practical dose calculation resolution produces deviations in the resulting dose distributions. Methods: EGSnrc calculated dose distributions from the BRANCH lung model of Part I are compared and contrasted to dose distributions from a CT representation of the same BRANCH model for three different phases of the respiration cycle. Results: The exclusion of branching structures below a CT resolution of 1×1×2 mm³ resulted in a deviation in dose. The deviation in dose was as high as 14% but was localized around the branching structures. There was no significant variation in the dose deviation as a function of either field size or lung density. Conclusions: The exclusion of explicit branching structures of the lung in a CT representation creates localized deviations in dose. To ensure accurate dose calculations, CT resolution must be increased.
Energy Science and Technology Software Center (OSTI)
2002-07-15
SNL-ptc2acis translates Pro/Engineer descriptions of parts, assemblies, and cross-sections to ACIS representation. It is developed using Pro/Toolkit and the ACIS kernel. As such, it requires a Pro/Engineer license in order to execute, but is not subject to the issues of file encryption as a direct file reader would be.
Statistical representation of clouds in climate models
U.S. Department of Energy (DOE) all webpages (Extended Search)
approach for representing ice microphysics in bin and bulk schemes: Application to TWP-ICE deep convection. Hugh Morrison and Wojciech Grabowski, National Center for Atmospheric Research. ARM STM, Monday, April 1, 2009. 1) Uncertainty of ice initiation processes; 2) Wide range of ice particle characteristics (e.g., shape, effective density); 3) No clear separation of physical processes for small and large crystals. The treatment of ice microphysics has a large impact on model simulations, e.g.,
PART IV REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
... services for which rates are set by law or regulation. (ii) 52.203-11, ... treated by DOE, to the extent permitted by law, as business or financial information ...
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
Section L, Page i SECTION L INSTRUCTIONS, CONDITIONS, AND NOTICES TO OFFERORS OR RESPONDENTS TABLE OF CONTENTS L-1 SYSTEMS FOR AWARD MANAGEMENT (JUL 2013) ..................................................................... 148 L-2 FAR 52.215-1 INSTRUCTIONS TO OFFERORS - COMPETITIVE ACQUISITION (JAN 2004) 150 L-3 FAR 52.216-1 TYPE OF CONTRACT (APR 1984) ............................................................................... 155 L-4 FAR 52.222-24 PREAWARD ON-SITE EQUAL OPPORTUNITY
explicit representation of uncertainty in system load
U.S. Department of Energy (DOE) all webpages (Extended Search)
system load - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & ... Nuclear Fuel Cycle Defense Waste Management Programs Advanced Nuclear ...
Exploiting data representation for fault tolerance
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; Mueller, F.
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
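The bit-flip fault model is easy to reproduce: view a double as a 64-bit integer, XOR one bit, and reinterpret. The sketch below is illustrative only (it is not the authors' code), but it shows the "small or very large" error dichotomy the abstract describes, and why normalized dot-product inputs keep mantissa-bit errors tiny.

```python
import struct

def flip_bit(x, bit):
    """Flip one bit (0 = mantissa LSB, 52-62 = exponent, 63 = sign)
    of an IEEE 754 double and reinterpret the result."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

def dot(u, v):
    """Plain dot product; with both inputs unit-normalized, |dot| <= 1."""
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    """Scale a vector to unit 2-norm."""
    n = dot(v, v) ** 0.5
    return [a / n for a in v]
```

Flipping the lowest mantissa bit of 1.0 perturbs it by only 2^-52, while flipping the top exponent bit turns 1.0 into infinity; this is the dichotomy that makes large faults detectable.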
A representation for efficient temporal reasoning
Delgrande, J.P.; Gupta, A.
1996-12-31
It has been observed that the temporal reasoning component in a knowledge-based system is frequently a bottleneck. We investigate here a class of graphs appropriate for an interesting class of temporal domains and for which very efficient algorithms for reasoning are obtained, that of series-parallel graphs. These graphs can be used for example to model process execution, as well as various planning or scheduling activities. Events are represented by nodes of a graph and relationships are represented by edges labeled by ≤ or <. Graphs are composed using a sequence of series and parallel steps (recursively) on series-parallel graphs. We show that there is an O(n) time preprocessing algorithm that allows us to answer queries about the events in O(1) time. Our results make use of a novel embedding of the graphs on the plane that is of independent interest. Finally we argue that these results may be incorporated in general graphs representing temporal events by extending the approach of Gerevini and Schubert.
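The flavor of O(n) preprocessing with O(1) queries can be sketched with the classical dimension-2 realizer for series-parallel orders: compute two linear extensions (parallel branches visited forward, then reversed), after which each precedence query is two integer comparisons. This is a simplified scheme, not the paper's planar embedding, and all names below are invented for the sketch.

```python
from itertools import count

class Event:
    """A leaf event in a series-parallel order."""
    def __init__(self, name):
        self.name = name

def series(*parts):
    """Series composition: every event on the left precedes every event on the right."""
    return ("S", parts)

def parallel(*parts):
    """Parallel composition: events in different branches are unordered."""
    return ("P", parts)

def label(expr):
    """O(n) preprocessing: position each event in two linear extensions.
    Series-parallel orders have order dimension <= 2, so the extension
    with parallel branches forward plus the one with them reversed
    together realize the order."""
    c1, c2 = count(), count()

    def walk(node, rev):
        if isinstance(node, Event):
            if rev:
                node.p2 = next(c2)
            else:
                node.p1 = next(c1)
            return
        kind, parts = node
        order = parts if (kind == "S" or not rev) else tuple(reversed(parts))
        for part in order:
            walk(part, rev)

    walk(expr, False)
    walk(expr, True)

def precedes(a, b):
    """O(1) query: a is before b iff it is before b in both extensions."""
    return a.p1 < b.p1 and a.p2 < b.p2
```

For the process `series(a, parallel(b, c), d)`, the query `precedes(b, c)` correctly returns False (they are concurrent) while `precedes(a, d)` returns True.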
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
... D. Name, Title, Phone Number, and Email of Supervisor: E. General Summary: ... clauses, Part III, Section J -- List of Documents, Exhibits and Other Attachments, and ...
PART IV REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
... promises to repay a specific amount of money to the bondholder, plus a certain amount ... Total Capital Commitment: The sum of money and other property an enterprise uses in ...
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
... Unbalanced pricing exists when, despite an acceptable total evaluated price, the price of ... Contracting Officer U. S. Department of Energy, National Nuclear Security Administration ...
3D Modeling Engine Representation Summary Report
Steven Prescott; Ramprasad Sampath; Curtis Smith; Timothy Yang
2014-09-01
Computers have been used for 3D modeling and simulation, but only recently have computational resources been able to give realistic results in a reasonable time frame for large complex models. This summary report addresses the methods, techniques, and resources used to develop a 3D modeling engine to represent risk analysis simulation for advanced small modular reactor structures and components. The simulations done for this evaluation were focused on external events, specifically tsunami floods, for a hypothetical nuclear power facility on a coastline.
A situated knowledge representation of geographical information
Gahegan, Mark N.; Pike, William A.
2006-11-01
In this paper we present an approach to conceiving of, constructing and comparing the concepts developed and used by geographers, environmental scientists and other earth science researchers to help describe, analyze and ultimately understand their subject of study. Our approach is informed by the situations under which concepts are conceived and applied, captures details of their construction, use and evolution and supports their ultimate sharing along with the means for deep exploration of conceptual similarities and differences that may arise among a distributed network of researchers. The intent here is to support different perspectives onto GIS resources that researchers may legitimately take, and to capture and compute with aspects of epistemology, to complement the ontologies that are currently receiving much attention in the GIScience community.
Method for contour extraction for object representation
Skourikhine, Alexei N.; Prasad, Lakshman
2005-08-30
Contours are extracted for representing a pixelated object in a background pixel field. An object pixel is located that is the start of a new contour for the object and identifying that pixel as the first pixel of the new contour. A first contour point is then located on the mid-point of a transition edge of the first pixel. A tracing direction from the first contour point is determined for tracing the new contour. Contour points on mid-points of pixel transition edges are sequentially located along the tracing direction until the first contour point is again encountered to complete tracing the new contour. The new contour is then added to a list of extracted contours that represent the object. The contour extraction process associates regions and contours by labeling all the contours belonging to the same object with the same label.
explicit representation of uncertainty in solar generation
U.S. Department of Energy (DOE) all webpages (Extended Search)
explicit representation of uncertainty in wind generation
U.S. Department of Energy (DOE) all webpages (Extended Search)
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
... (2) The offeror shall enter, in the block with its name ... If this solicitation is amended, all terms and conditions ... To facilitate the Government's search for key words during ...
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
... of the rights of the United States in inventions conceived or first actually reduced to ... of the United States in identified inventions, i.e., individual inventions conceived ...
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
148 L-2 FAR 52.215-1 INSTRUCTIONS TO OFFERORS -- COMPETITIVE ACQUISITION (JAN 2004) 150 L-3 FAR 52.216-1 TYPE OF CONTRACT (APR 1984) ............................................................................... 155 L-4 FAR 52.222-24 PREAWARD ON-SITE EQUAL OPPORTUNITY COMPLIANCE EVALUATION (FEB 1999) ................................................................................................................................................................ 155 L-5 FAR 52.233-2 SERVICE OF
Recursive bias estimation for high dimensional regression smoothers
Hengartner, Nicolas W.; Cornillon, Pierre-Andre; Matzner-Lober, Eric
2009-01-01
In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to a biased smoother. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of it obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L_2-Boosting. For the multivariate thin-plate spline smoother, we prove that our procedure adapts to the correct and unknown order of smoothness for estimating an unknown function m belonging to H(ν) (a Sobolev space, where ν should be bigger than d/2). We apply our method to simulated and real data and show that our method compares favorably with existing procedures.
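The iterative bias correction can be sketched in a few lines: over-smooth deliberately, then repeatedly smooth the residuals and add the result back (the L_2-Boosting connection mentioned above). This is an illustrative reconstruction with a Nadaraya-Watson smoother, not the authors' thin-plate spline implementation.

```python
import numpy as np

def kernel_smoother(x, h):
    """Row-stochastic Nadaraya-Watson weight matrix for a Gaussian kernel
    with a deliberately large (over-smoothing) bandwidth h."""
    d = x[:, None] - x[None, :]
    w = np.exp(-0.5 * (d / h) ** 2)
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

S = kernel_smoother(x, h=0.3)   # huge bandwidth: small variance, large bias
fit = S @ y                     # biased initial estimator
for _ in range(20):             # correct the bias by smoothing the residuals
    fit = fit + S @ (y - fit)
```

After the iterations the estimator recovers the low-frequency signal that the initial over-smoothing flattened, at a modest cost in variance.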
Evaluating multimedia chemical persistence: Classification and regression tree analysis
Bennett, D.H.; McKone, T.E.; Kastenberg, W.E.
2000-04-01
For the thousands of chemicals continuously released into the environment, it is desirable to make prospective assessments of those likely to be persistent. Widely distributed persistent chemicals are impossible to remove from the environment and remediation by natural processes may take decades, which is problematic if adverse health or ecological effects are discovered after prolonged release into the environment. A tiered approach using a classification scheme and a multimedia model for determining persistence is presented. Using specific criteria for persistence, a classification tree is developed to classify a chemical as persistent or nonpersistent based on the chemical properties. In this approach, the classification is derived from the results of a standardized unit world multimedia model. Thus, the classifications are more robust for multimedia pollutants than classifications using a single medium half-life. The method can be readily implemented and provides insight without requiring extensive and often unavailable data. This method can be used to classify chemicals when only a few properties are known and can be used to direct further data collection. Case studies are presented to demonstrate the advantages of the approach.
Partial Support Vector Regression to Mitigate Silent Errors in...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Computer Engineering with double major in Mathematics from Koc University, Istanbul, Turkey in 2009. He got his Master degree in Computer Science and Engineering from Koc...
ORISE-09-OEWH-0176 POISSON REGRESSION ANALYSIS OF ILLNESS AND...
U.S. Department of Energy (DOE) all webpages (Extended Search)
... Science and Mathematics Division at the Oak ... Journal of the American Statistical Association ... Number Scientific Publication 82. International Agency for Research on ...
A Principled Kernel Testbed for Hardware/Software Co-Design Research
Kaiser, Alex; Williams, Samuel; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David; Demmel, James; Strohmaier, Erich
2010-04-01
Recently, advances in processor architecture have become the driving force for new programming models in the computing industry, as ever newer multicore processor designs with increasing number of cores are introduced on schedules regimented by marketing demands. As a result, collaborative parallel (rather than simply concurrent) implementations of important applications, programming languages, models, and even algorithms have been forced to adapt to these architectures to exploit the available raw performance. We believe that this optimization regime is flawed. In this paper, we present an alternate approach that, rather than starting with an existing hardware/software solution laced with hidden assumptions, defines the computational problems of interest and invites architects, researchers and programmers to implement novel hardware/software co-designed solutions. Our work builds on the previous ideas of computational dwarfs, motifs, and parallel patterns by selecting a representative set of essential problems for which we provide: An algorithmic description; scalable problem definition; illustrative reference implementations; verification schemes. This testbed will enable comparative research in areas such as parallel programming models, languages, auto-tuning, and hardware/software codesign. For simplicity, we focus initially on the computational problems of interest to the scientific computing community but proclaim the methodology (and perhaps a subset of the problems) as applicable to other communities. We intend to broaden the coverage of this problem space through stronger community involvement.
Point Kernel Calculation for Complex and Time-Dependent Gamma-Ray Source Spectra.
Energy Science and Technology Software Center (OSTI)
1990-04-01
Version 00 PRESTO is written especially for simple shielding design studies. The chosen approximation is suited to shielding calculations for piping and spherical/cylindrical containers. Surface sources built up by radioactive deposits can be estimated. PRESTO I treats cylinder sources with shields at the side, such as pipelines or containers in radioactive facilities. PRESTO II is the analogous code for spherical sources. The programs consider volume sources or a combination of volume and surface sources. To describe the source spectrum, one begins with the nuclides contained in the source mixture or (with the aid of PRESTO IA) from energy group sets. The internal data set contains 5 common shield construction materials.
General Purpose Kernel Integration Shielding Code System-Point and Extended Gamma-Ray Sources.
Energy Science and Technology Software Center (OSTI)
1981-06-11
PELSHIE3 calculates dose rates from gamma-emitting sources with different source geometries and shielding configurations. Eight source geometries are provided and are called by means of geometry index numbers. Gamma-emission characteristics for 134 isotopes, attenuation coefficients for 57 elements or shielding materials, and Berger build-up parameters for 17 shielding materials can be obtained from a direct access data library by specifying only the appropriate library numbers. A different option allows these data to be read from cards. For extended sources, constant source strengths as well as exponential and Bessel function source strength distributions are allowed in most cases.
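The point-kernel quantity underlying codes like PELSHIE3 can be sketched as uncollided flux times a build-up factor. Below is a hedged illustration using the Berger form B = 1 + a·μt·exp(b·μt) mentioned above; the coefficient values in the usage line are hypothetical (real a, b are material- and energy-dependent library data).

```python
import math

def point_kernel_flux(S, mu, t, r, a, b):
    """Point-kernel gamma flux at distance r (cm) from a point source of
    strength S (photons/s), attenuated through a shield of thickness t (cm)
    with attenuation coefficient mu (1/cm), corrected by a Berger build-up
    factor B = 1 + a*mu*t*exp(b*mu*t).  Returns photons/cm^2/s."""
    mfp = mu * t                                  # shield thickness in mean free paths
    buildup = 1.0 + a * mfp * math.exp(b * mfp)   # Berger build-up, B >= 1
    return S * buildup * math.exp(-mfp) / (4.0 * math.pi * r * r)

# with a = b = 0 the build-up factor is 1 and only uncollided flux remains
bare = point_kernel_flux(1e6, 0.5, 4.0, 100.0, 0.0, 0.0)
built = point_kernel_flux(1e6, 0.5, 4.0, 100.0, 1.0, 0.1)
```

Scattered photons can only add to the uncollided flux, so the built-up result always exceeds the bare one.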
T-667: Red Hat Enterprise Linux kernel security and bug fix update
It was found that an mmap() call with the MAP_PRIVATE flag on "/dev/zero" would create transparent hugepages and trigger a certain robustness check. A local, unprivileged user could use this flaw to cause a denial of service. (CVE-2011-2479, Moderate)
U-068:Linux Kernel SG_IO ioctl Bug Lets Local Users Gain Elevated Privileges
Vulnerability was reported in FreeBSD Telnet. A remote user can execute arbitrary code on the target system.
Wave kernels for the Dirac, Euler operators and the harmonic oscillator
Mohameden, Ahmedou Yahya Ould Moustapha, Mohamed Vall Ould
2014-03-15
Explicit solutions for the wave equations associated to the Dirac, Euler operators and the harmonic oscillator are given.
Programmability of the HPCS Languages: A Case Study with a Quantum Chemistry Kernel
Shet, Aniruddha G; Elwasif, Wael R; Harrison, Robert J; Bernholdt, David E
2008-01-01
As high-end computer systems present users with rapidly increasing numbers of processors, possibly also incorporating attached co-processors, programmers are increasingly challenged to express the necessary levels of concurrency with the dominant parallel programming model, Fortran+MPI+OpenMP (or minor variations). In this paper, we examine the languages developed under the DARPA High-Productivity Computing Systems (HPCS) program (Chapel, Fortress, and X10) as representatives of a different parallel programming model which might be more effective on coming high-performance systems. The application used in this study is the Hartree-Fock method from quantum chemistry, which combines access to distributed data with a task-parallel algorithm and is characterized by significant irregularity in the computational tasks. We present several different implementation strategies for load balancing of the task parallel computation, as well as distributed array operations, in each of the three languages. We conclude that the HPCS languages provide a wide variety of mechanisms for expressing parallelism, which can be combined at multiple levels, making them quite expressive for this problem.
A non-Gaussian treatment of radiation pencil beams
Pomraning, G.C.
1997-10-01
The problem of describing steady-state transport of a perpendicularly incident particle beam through a thin slab of material is considered. For a scattering kernel sufficiently peaked in momentum transfer to allow a Fokker-Planck description of the scattering process in both energy and angle, an approximate closed form solution to this problem was obtained almost 50 yr ago and is referred to as the Fermi-Eyges formula. It is shown that a Fermi-Eyges-like formula can be derived for a broader class of scattering kernels. This class consists of scattering described by the continuous slowing-down approximation (the Fokker-Planck description in energy), but not sufficiently forward peaked in angle to allow an angular Fokker-Planck representation. This generalized formula reduces to the classic Fermi-Eyges result for scattering operators with a valid Fokker-Planck limit and also describes problems that, while involving a forward-peaked scattering kernel, do not possess a Fokker-Planck description. A classic example of such a kernel is the Henyey-Greenstein kernel, and the Fermi-Eyges-like solution in this case exhibits more beam spreading than that predicted by the classic Fermi-Eyges formula. In particular, the scalar flux is non-Gaussian in the radial coordinate, as contrasted with the Gaussian Fermi-Eyges result.
Percent of Industrial Natural Gas Deliveries in Connecticut Represente...
Gasoline and Diesel Fuel Update
Decade Year-0 Year-1 Year-2 Year-3 Year-4 Year-5 Year-6 Year-7 Year-8 Year-9 1990's 66.4 55.8 55.8 2000's 47.3 54.0 48.9 45.3 44.0 46.4 48.5 50.0 47.3 37.5 2010's 31.1 31.0 32.3...
Compact representation of radiation patterns using spherical mode expansions
Simpson, T.L.; Chen, Yinchao (Dept. of Electrical and Computer Engineering)
1990-07-15
This report presents the results of an investigation of SM (Spherical Mode) expansions as a compact and efficient alternative to the use of current distributions for generating radiation patterns. The study included three areas: (1) SM expansion from the radiation pattern; (2) SM expansion from the antenna current; and (3) literature search. SM expansions were obtained from radiation patterns during the initial phase of this study. Although straightforward in principle, this technique was found to be awkward for the treatment of theoretical radiation patterns. It is included here for completeness and for possible use to summarize experimental results in a more meaningful way than with an exhaustive display of amplitude with azimuth and elevation angles. In essence, the work in this area served as a warm-up problem to develop our skills in computing and manipulating spherical modes as mathematical entities. 6 refs., 21 figs., 6 tabs.
Index of public plots and other data representations
U.S. Department of Energy (DOE) all webpages (Extended Search)
the purity monitor appear here from the Phase I operation of the MicroBooNE cryogenics system. The electron drift lifetime can be determined from taking the ratio of the heights...
Representation of Dormant and Active Microbial Dynamics for Ecosystem Modeling
Wang, Gangsheng; Mayes, Melanie; Gu, Lianhong; Schadt, Christopher Warren
2014-01-01
Dormancy is an essential strategy for microorganisms to cope with environmental stress. However, global ecosystem models typically ignore microbial dormancy, resulting in notable model uncertainties. To facilitate the consideration of dormancy in these large-scale models, we propose a new microbial physiology component that works for a wide range of substrate availabilities. This new model is based on microbial physiological states and the major parameters are the maximum specific growth and maintenance rates of active microbes and the ratio of dormant to active maintenance rates. A major improvement of our model over extant models is that it can explain the low active microbial fractions commonly observed in undisturbed soils. Our new model shows that the exponentially-increasing respiration from substrate-induced respiration experiments can only be used to determine the maximum specific growth rate and initial active microbial biomass, while the respiration data representing both exponentially-increasing and non-exponentially-increasing phases can robustly determine a range of key parameters including the initial total live biomass, initial active fraction, the maximum specific growth and maintenance rates, and the half-saturation constant. Our new model can be incorporated into existing ecosystem models to account for dormancy in microbially-driven processes and to provide improved estimates of microbial activities.
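As a hedged illustration of the active/dormant bookkeeping such a model component requires (this is not Wang et al.'s actual formulation; all equations and parameter values here are a toy sketch), the code below integrates a two-pool model in which starvation drives cells dormant and substrate reactivates them. It reproduces the qualitative result above: once substrate is exhausted, the active fraction becomes small.

```python
def simulate(steps=10000, dt=0.01,
             mu_max=0.5,   # maximum specific growth rate of active microbes
             m_a=0.05,     # specific maintenance rate of active microbes
             beta=0.01,    # ratio of dormant to active maintenance rates
             Ks=1.0,       # half-saturation constant for substrate
             S0=0.1):      # initial substrate (deliberately scarce)
    """Forward-Euler integration of a toy active/dormant microbial model."""
    Ba, Bd, S = 0.1, 0.9, S0          # active biomass, dormant biomass, substrate
    for _ in range(steps):
        f = S / (Ks + S)              # Monod substrate saturation, in [0, 1)
        growth = mu_max * f * Ba
        to_dormant = m_a * (1 - f) * Ba   # starvation pushes cells dormant
        to_active = mu_max * f * Bd       # substrate reactivates dormant cells
        Ba += dt * (growth - m_a * f * Ba - to_dormant + to_active)
        Bd += dt * (to_dormant - to_active - beta * m_a * Bd)
        S = max(S - dt * growth, 0.0)
    return Ba, Bd

Ba, Bd = simulate()
```

With scarce substrate the active pool drains into dormancy, consistent with the low active fractions observed in undisturbed soils.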
Simple Model Representations of Transport in a Complex Fracture...
Office of Scientific and Technical Information (OSTI)
It is common, however, to represent the complex fracture by much simpler models consisting ... Simple-model properties are often inferred from the analysis of short-term (one to a few ...
Analysis and Representation of Miscellaneous Electric Loads in NEMS -
U.S. Energy Information Administration (EIA) (indexed site)
Program for Converting C/C++ Texts to LaTeX Representation
Energy Science and Technology Software Center (OSTI)
1994-12-30
Used to insert pieces of C/C++ code into LaTeX documents with fontified language tokens. Different fonts can be used for different language constructions such as float, while, etc.
β-Decay in the Skyrme-Witten representation of QCD
Snyderman, N.J.
1991-05-01
The renormalized coupling strength of the β-decay axial vector current is related to π±p cross sections through the Adler-Weisberger sum rule, which follows from chiral symmetry. We attempt to understand the Adler-Weisberger sum rule in the 1/N_c expansion in QCD, and in the Skyrme-Witten model that realizes the 1/N_c expansion in the low-energy limit, using it to explicitly calculate both g_A and the π±p cross sections. 32 refs.
Impact of aerosol size representation on modeling aerosol...
Office of Scientific and Technical Information (OSTI)
... Technol., 20, 1 -30, 1994. Jacobson, M. Z., Development and application of a new air pollution mod- eling system, II, Aerosol module structure and design, Atmos. Environ., 31, ...
Light-front representation of chiral dynamics in peripheral transverse...
Office of Scientific and Technical Information (OSTI)
The method can be applied to nucleon form factors of other operators, e.g. the energy-momentum tensor. Authors: Granados, Carlos G. 1 ; Weiss, Christian 2 + Show Author ...
Dynamic Potential Intensity: An improved representation of the...
Office of Scientific and Technical Information (OSTI)
Citation Details In-Document Search Title: Dynamic Potential Intensity: An improved ... average of temperature down to a fixed depth was proposed as a replacement for SST ...
Representations and image classification methods for Cherenkov telescopes
Malagon, C.; Parcerisa, D. S.; Barrio, J. A.; Nieto, D.
2008-05-29
The problem of identifying gamma ray events out of charged cosmic ray background (so called hadrons) in Cherenkov telescopes is one of the key problems in VHE gamma ray astronomy. In this contribution, we present a novel approach to this problem by implementing different classifiers relying on the information of each pixel of the camera of a Cherenkov telescope.
Delay correlation analysis and representation for vital complaint VHDL models
Rich, Marvin J.; Misra, Ashutosh
2004-11-09
A method and system unbind a rise/fall tuple of a VHDL generic variable and create rise time and fall time generics of each generic variable that are independent of each other. Then, according to a predetermined correlation policy, the method and system collect delay values in a VHDL standard delay file, sort the delay values, remove duplicate delay values, group the delay values into correlation sets, and output an analysis file. The correlation policy may include collecting all generic variables in a VHDL standard delay file, selecting each generic variable, and performing reductions on the set of delay values associated with each selected generic variable.
Percent of Industrial Natural Gas Deliveries in Connecticut Represente...
U.S. Energy Information Administration (EIA) (indexed site)
Year Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2001 66.1 48.5 50.9 50.2 58.7 44.3 34.1 58.5 55.7 73.8 58.9 51.8 2002 45.0 47.4 53.0 41.3 52.5 50.1 38.1 49.3 53.9 52.2 49.1 ...
Signatures of quantum chaos in Wigner and Husimi representations
Lee, S.B.; Feit, M.D. (Physics Department, Lawrence Livermore National Laboratory, Livermore, California 94550, United States; Department of Applied Science, University of California, Davis/Livermore, Livermore, California 94550, United States)
1993-06-01
In this paper, we study the quantum manifestations of classical chaos in phase space using Wigner and Husimi distribution functions. We test the claim that Husimi represents the correspondence better than Wigner does. The results show the claim is valid. We also use a quantum dissipation scheme empirically for classically damped motions often characterized by strange attractors. We believe quantum resemblance to classical distributions can be regarded as signatures of quantum chaos in phase space.
Percent of Commercial Natural Gas Deliveries in Connecticut Represente...
U.S. Energy Information Administration (EIA) (indexed site)
Decade Year-0 Year-1 Year-2 Year-3 Year-4 Year-5 Year-6 Year-7 Year-8 Year-9 1990's 96.0 93.0 96.5 98.1 80.9 82.0 87.0 81.9 68.7 62.8 2000's 78.3 77.6 72.4 68.1 69.0 70.3 71.0 71.5...
The Institute for Public Representation, on behalf of the Potomac...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
1, 2007. PDF icon The Institute for Public ... More Documents & Publications Comments on Department of ... under U.S. Department of Energy Emergency Orders Regarding ...
Graphical representation of parallel algorithmic processes. Master's thesis
Williams, E.M.
1990-12-01
Algorithm animation is a visualization method used to enhance understanding of the functioning of an algorithm or program. Visualization is used for many purposes, including education, algorithm research, performance analysis, and program debugging. This research applies algorithm animation techniques to programs developed for parallel architectures, with specific focus on the Intel iPSC/2 hypercube. While both P-time and NP-time algorithms can potentially benefit from using visualization techniques, the set of NP-complete problems provides fertile ground for developing parallel applications, since the combinatoric nature of the problems makes finding the optimum solution impractical. The primary goals for this visualization system are: Data should be displayed as it is generated. The interface to the target program should be transparent, allowing the animation of existing programs. Flexibility - the system should be able to animate any algorithm. The resulting system incorporates and extends two AFIT products: the AFIT Algorithm Animation Research Facility (AAARF) and the Parallel Resource Analysis Software Environment (PRASE). AAARF is an algorithm animation system developed primarily for sequential programs, but is easily adaptable for use with parallel programs. PRASE is an instrumentation package that extracts system performance data from programs on the Intel hypercubes. Since performance data is an essential part of analyzing any parallel program, views of the performance data are provided as an elementary part of the system. Custom software is designed to interface these systems and to display the program data. The program chosen as the example for this study is a member of the NP-complete problem set; it is a parallel implementation of a general.
Toward a Minimal Representation of Aerosols in Climate Models...
Office of Scientific and Technical Information (OSTI)
and external mixing between aerosol components, treating numerous complicated aerosol ... black carbon (BC) with other aerosol components, merging of the MAM7 fine dust and fine ...
Energy Science and Technology Software Center (OSTI)
2007-09-05
The SPL emulates Solaris kernel functionality in the Linux kernel, thus making it trivial to bring open source Solaris code into the Linux kernel.
Burke, Timothy Patrick; Kiedrowski, Brian; Martin, William R.; Brown, Forrest B.
2015-08-27
KDEs show potential for reducing variance for global solutions (flux, reaction rates) when compared to histogram solutions.
Spectrotemporal CT data acquisition and reconstruction at low dose
Clark, Darin P.; Badea, Cristian T.; Lee, Chang-Lung; Kirsch, David G.
2015-11-15
Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time- and energy-averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time- and energy-averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach, which the authors refer to as rank-sparse kernel regression, they transfer image structure from the well-sampled time- and energy-averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction
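The low-rank plus sparse split at the core of this framework can be illustrated with a generic alternating scheme: truncated SVD for the low-rank part, soft-thresholding for the sparse part. This is a simplified stand-in for principal component pursuit on a synthetic matrix, not the authors' 5D reconstruction algorithm; all names and parameter values are illustrative.

```python
import numpy as np

def lowrank_sparse_split(M, rank, lam, iters=50):
    """Alternate L = best rank-r approximation of M - S with
    S = soft-threshold of M - L (a crude principal-component-pursuit sketch)."""
    S = np.zeros_like(M)
    L = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # hard rank truncation
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # soft threshold
    return L, S

# synthetic data: a rank-1 "background" plus sparse large-magnitude "contrast"
rng = np.random.default_rng(1)
L_true = rng.standard_normal((40, 1)) @ rng.standard_normal((1, 40))
S_true = np.zeros((40, 40))
S_true.flat[rng.choice(1600, size=40, replace=False)] = 10.0
L_est, S_est = lowrank_sparse_split(L_true + S_true, rank=1, lam=1.0)
```

When the two components are well separated in rank and magnitude, as here, the alternation recovers the low-rank part to within a small relative error.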
Fine, Dana S.; Sawin, Stephen
2014-06-15
Following Feynman's prescription for constructing a path integral representation of the propagator of a quantum theory, a short-time approximation to the propagator for imaginary-time, N = 1 supersymmetric quantum mechanics on a compact, even-dimensional Riemannian manifold is constructed. The path integral is interpreted as the limit of products, determined by a partition of a finite time interval, of this approximate propagator. The limit under refinements of the partition is shown to converge uniformly to the heat kernel for the Laplace-de Rham operator on forms. A version of the steepest descent approximation to the path integral is obtained, and shown to give the expected short-time behavior of the supertrace of the heat kernel.
Narumalani, S.; Jensen, J.R.; Althausen, J.D.; Burkhalter, S.; Mackey, H.E. Jr.
1994-06-01
Since aquatic macrophytes have an important influence on the physical and chemical processes of an ecosystem while simultaneously affecting human activity, it is imperative that they be inventoried and managed wisely. However, mapping wetlands can be a major challenge because they are found in diverse geographic areas ranging from small tributary streams, to shrub or scrub and marsh communities, to open water lacustrian environments. In addition, the type and spatial distribution of wetlands can change dramatically from season to season, especially when nonpersistent species are present. This research focuses on developing a model for predicting the future growth and distribution of aquatic macrophytes. This model will use a geographic information system (GIS) to analyze some of the biophysical variables that affect aquatic macrophyte growth and distribution. The data will provide scientists information on the future spatial growth and distribution of aquatic macrophytes. This study focuses on the Savannah River Site's Par Pond (1,000 ha) and L Lake (400 ha), two cooling ponds that have received thermal effluent from nuclear reactor operations. Par Pond was constructed in 1958, and natural invasion of wetland has occurred over its 35-year history, with much of the shoreline having developed extensive beds of persistent and non-persistent aquatic macrophytes.
Field, M.E.; Trincardi, F.
1990-05-01
The shelf of the eastern Tyrrhenian margin changes substantially in width, shelf-break depth, and sea-floor steepness over relatively short distances, largely due to marked lateral changes in geologic structure. Remnants of late Pleistocene prograded coastal deposits are locally preserved on the middle and outer parts of this complex shelf. Through their studies of these prograded deposits, the authors recognize two major controls on their distribution, lateral extent, thickness, and preservation potential. First, prograded (downlapped) deposits formed only where the physiographic shelf break was deeper than the lowstand shoreline, thus providing accommodation space for the lowstand deposits. Second, the proximity and relative size of sediment sources and the local coastal dispersal system influenced the geometry of the deposit. Mid-shelf and shelf-margin bodies composed of seaward-steepening downlapping reflectors were deposited as thin-to-thick continuous prograding sheets over an irregular eroded shelf surface and onto the shelf edge during the last fall and lowstand of sea level. A dearth of sediment at the end of lowstand conditions led to a switch from deposition to erosion. During sea level rise, shoreface erosion produced a major marine erosional (ravinement) surface landward of the 120-m isobath, and much, and in many places all, of the downlapping deposit was removed. Preservation of downlapping deposits is largely a function of their thickness. Thick continuous deposits are common on the shelf edge, whereas on the mid-shelf only thin remnants are preserved locally where depressions or morphologic steps were present in the shelf surface.
Henze, G. P.; Pless, S.; Petersen, A.; Long, N.; Scambos, A. T.
2014-02-01
Approaches are needed to continuously characterize the energy performance of commercial buildings to allow for (1) timely response to excess energy use by building operators; and (2) building occupants to develop energy awareness and to actively engage in reducing energy use. Energy information systems, often involving graphical dashboards, are gaining popularity in presenting energy performance metrics to occupants and operators in a (near) real-time fashion. Such an energy information system, called Building Agent, has been developed at NREL and incorporates a dashboard for public display. Each building is, by virtue of its purpose, location, and construction, unique. Thus, assessing building energy performance is possible only in a relative sense, as comparison of absolute energy use out of context is not meaningful. In some cases, performance can be judged relative to average performance of comparable buildings. However, in cases of high-performance building designs, such as NREL's Research Support Facility (RSF) discussed in this report, relative performance is meaningful only when compared to historical performance of the facility or to a theoretical maximum performance of the facility as estimated through detailed building energy modeling.
Lee, C.M.; Schock, H.J.
1988-01-01
Currently, the heat transfer equation used in the rotary combustion engine (RCE) simulation model is taken from piston engine studies. These relations were developed empirically from experimental data on piston engines, whose geometry differs considerably from that of the RCE. The objective of this work was to derive equations to estimate heat transfer coefficients in the combustion chamber of an RCE. This was accomplished by making detailed temperature and pressure measurements in a direct injection stratified charge (DISC) RCE under a range of conditions. For each specific measurement point, the local gas velocity was assumed equal to the local rotor tip speed. Local physical properties of the fluids were then calculated. Two types of correlation equations were derived and are described in this paper. The first correlation expresses the Nusselt number as a function of the Prandtl number, Reynolds number, and characteristic temperature ratio; the second correlation expresses the forced convection heat transfer coefficient as a function of fluid temperature, pressure and velocity. 10 references.
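The general shape of such a correlation can be sketched as follows. The constants below are the classic Dittus-Boelter values, used only as placeholders for the RCE-specific fit described in the abstract; the property values are likewise illustrative, not measurements from this work.

```python
def heat_transfer_coefficient(rho, mu, cp, k, velocity, length,
                              C=0.023, m=0.8, n=0.4):
    """Forced-convection h from a generic Nu = C * Re^m * Pr^n correlation.
    C, m, n are placeholder (Dittus-Boelter) constants, not the RCE fit."""
    Re = rho * velocity * length / mu   # Reynolds number
    Pr = cp * mu / k                    # Prandtl number
    Nu = C * Re**m * Pr**n              # Nusselt number
    return Nu * k / length              # h, W/(m^2 K)

# air-like properties at a nominal in-chamber state (illustrative values)
h_slow = heat_transfer_coefficient(1.2, 1.8e-5, 1005.0, 0.026, 10.0, 0.05)
h_fast = heat_transfer_coefficient(1.2, 1.8e-5, 1005.0, 0.026, 40.0, 0.05)
```

As expected for m > 0, the predicted coefficient grows with the local gas velocity, which is why the rotor tip speed assumption matters in the correlation.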
Integral equation for gauge invariant quark Green's function
Sazdjian, H.
2008-08-29
We consider gauge invariant quark two-point Green's functions in which the gluonic phase factor follows a skew-polygonal line. Using a particular representation for the quark propagator in the presence of an external gluon field, functional relations between Green's functions with different numbers of segments of the polygonal lines are established. An integral equation is obtained for the Green's function having a phase factor along a single straight line. The related kernels involve Wilson loops with skew-polygonal contours and with functional derivatives along the sides of the contours.
New approach to folding with the Coulomb wave function
Blokhintsev, L. D.; Savin, D. A.; Kadyrov, A. S.; Mukhamedzhanov, A. M.
2015-05-15
Due to the long-range character of the Coulomb interaction, theoretical description of low-energy nuclear reactions with charged particles still remains a formidable task. One way of dealing with the problem in an integral-equation approach is to employ a screened Coulomb potential. A general approach without screening requires folding of kernels of the integral equations with the Coulomb wave. A new method of folding a function with the Coulomb partial waves is presented. The partial-wave Coulomb function both in the configuration and momentum representations is written in the form of separable series. Each term of the series is represented as a product of a factor depending only on the Coulomb parameter and a function depending on the spatial variable in the configuration space and the momentum variable if the momentum representation is used. Using a trial function, the method is demonstrated to be efficient and reliable.
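The separable series described here has the schematic form below, where eta is the Coulomb (Sommerfeld) parameter; the symbols are generic placeholders for the basis functions defined in the paper:

```latex
\psi_\ell(\eta, k r) \;=\; \sum_{n} a_n(\eta)\, f_{n\ell}(k r),
```

so that folding a kernel with the Coulomb partial wave reduces to integrals over the eta-independent functions f_{n\ell}, with the Coulomb-parameter dependence factored out.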
Fracture Characterization in Enhanced Geothermal Systems by Wellbore and Reservoir Analysis
Horne, Roland N.; Li, Kewen; Alaskar, Mohammed; Ames, Morgan; Co, Carla; Juliusson, Egill; Magnusdottir, Lilja
2012-06-30
This report highlights the work that was done to characterize fractured geothermal reservoirs using production data. That includes methods that were developed to infer characteristic functions from production data and models that were designed to optimize reinjection scheduling into geothermal reservoirs, based on these characteristic functions. The characterization method provides a robust way of interpreting tracer and flow rate data from fractured reservoirs. The flow-rate data are used to infer the interwell connectivity, which describes how injected fluids are divided between producers in the reservoir. The tracer data are used to find the tracer kernel for each injector-producer connection. The tracer kernel describes the volume and dispersive properties of the interwell flow path. A combination of parametric and nonparametric regression methods was developed to estimate the tracer kernels for situations where data are collected at variable flow-rate or variable injected concentration conditions. The characteristic functions can be used to calibrate thermal transport models, which can in turn be used to predict the productivity of geothermal systems. This predictive model can be used to optimize injection scheduling in a geothermal reservoir, as is illustrated in this report.
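A minimal sketch of how a tracer kernel is used: the producer's tracer response is the convolution of the injected concentration history with the interwell kernel. The kernel shape and all names here are illustrative assumptions (a normalized residence-time distribution), not the estimators developed in the report.

```python
import numpy as np

def producer_response(injection, kernel, dt):
    """Predict tracer concentration at a producer as the convolution of
    the injected concentration history with the interwell tracer kernel."""
    return np.convolve(injection, kernel)[:len(injection)] * dt

dt = 1.0                               # time step, days
t = np.arange(0, 200, dt)
# hypothetical tracer kernel: a gamma-like residence-time distribution
kernel = (t / 400.0) * np.exp(-t / 20.0)
kernel /= kernel.sum() * dt            # normalize to unit mass
injection = np.zeros_like(t)
injection[:5] = 1.0                    # 5-day tracer slug at the injector
response = producer_response(injection, kernel, dt)
```

Because the kernel integrates to one, injected tracer mass is (approximately) conserved in the response, and the breakthrough peak is delayed by the mean residence time of the flow path.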
A comparison of methods for representing sparsely sampled random quantities.
Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua
2013-09-01
This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical tolerance intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
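A minimal Monte Carlo illustration of the tolerance-interval idea, assuming normal data. The factor 3.379 is an assumed standard-table value for a two-sided normal tolerance interval with n = 10, 95% coverage, and 95% confidence; the comparison shows why the naive mean +/- 1.96 s interval fails to bound the central 95% range from sparse samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 10, 2000
K_TOL = 3.379            # assumed two-sided tolerance factor (n=10, 95/95)
q_lo, q_hi = -1.959964, 1.959964   # central 95% range of the true N(0,1)

cover_tol = cover_naive = 0
for _ in range(trials):
    x = rng.normal(size=n)
    m, s = x.mean(), x.std(ddof=1)
    # does the interval bound the true 0.025-0.975 percentile range?
    if m - K_TOL * s <= q_lo and m + K_TOL * s >= q_hi:
        cover_tol += 1
    if m - 1.96 * s <= q_lo and m + 1.96 * s >= q_hi:
        cover_naive += 1
coverage_tol = cover_tol / trials
coverage_naive = cover_naive / trials
```

The tolerance interval bounds the target percentile range in roughly the advertised fraction of trials, while the naive interval does so far less often, which is exactly the conservatism-versus-tightness tension the report studies.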
Development and testing of improved statistical wind power forecasting methods.
Mendes, J.; Bessa, R.J.; Keko, H.; Sumaili, J.; Miranda, V.; Ferreira, C.; Gama, J.; Botterud, A.; Zhou, Z.; Wang, J.
2011-12-06
(with spatial and/or temporal dependence). Statistical approaches to uncertainty forecasting basically consist of estimating the uncertainty based on observed forecasting errors. Quantile regression (QR) is currently a commonly used approach in uncertainty forecasting. In Chapter 3, we propose new statistical approaches to the uncertainty estimation problem by employing kernel density forecast (KDF) methods. We use two estimators in both offline and time-adaptive modes, namely, the Nadaraya-Watson (NW) and Quantile-copula (QC) estimators. We conduct detailed tests of the new approaches using QR as a benchmark. One of the major issues in wind power generation is the occurrence of sudden and large changes of wind power output over a short period of time, namely ramping events. In Chapter 4, we perform a comparative study of existing definitions and methodologies for ramp forecasting. We also introduce a new probabilistic method for ramp event detection. The method starts with a stochastic algorithm that generates wind power scenarios, which are passed through a high-pass filter for ramp detection and estimation of the likelihood of ramp events to happen. The report is organized as follows: Chapter 2 presents the results of the application of ITL training criteria to deterministic WPF; Chapter 3 reports the study on probabilistic WPF, including new contributions to wind power uncertainty forecasting; Chapter 4 presents a new method to predict and visualize ramp events, comparing it with state-of-the-art methodologies; Chapter 5 briefly summarizes the main findings and contributions of this report.
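The Nadaraya-Watson estimator mentioned above can be sketched in a few lines: the prediction at a query point is a kernel-weighted average of training targets. The toy data and bandwidth below are illustrative assumptions, not the wind-power datasets of the report.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth):
    """Gaussian-kernel Nadaraya-Watson regression: a locally
    weighted average of the training targets."""
    d = (x_query[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d**2)            # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

# toy data standing in for (predicted power, observed error) pairs
x = np.linspace(0.0, 1.0, 101)
y = 2.0 * x                            # noiseless linear relation
pred = nadaraya_watson(x, y, np.array([0.25, 0.5, 0.75]), bandwidth=0.05)
```

In the KDF setting the same kernel weights are applied to a density estimate rather than a point prediction, so the forecast is a full conditional distribution instead of a single value.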
Topological fermionic string representation for Chern-Simons non-Abelian gauge theories
Botelho, L.C.L.
1990-05-15
We show that loop wave equations in non-Abelian Chern-Simons gauge theory are exactly solved by a conformally invariant topological fermionic string theory.
The emergence of Electronic Democracy as an auxiliary to representational democracy
Noel, R.E.
1994-06-01
Electronic democracy as a system is defined, and the ways in which it may affect current systems of government are addressed. Electronic democracy's achievements thus far in the United States at the community level are surveyed, and prospects for its expansion to state, national, and international systems are summarized. Central problems of electronic democracy are described, and its feasibility assessed (including safeguards against, and vulnerabilities to, sabotage and abuse); the ways in which new and ongoing methods for information dissemination pose risks to current systems of government are discussed. One of electronic democracy's underlying assumptions is challenged, namely that its direct, instant polling capability necessarily improves or refines governance. Further support is offered for the assertion that computer systems/networks should be used primarily to educate citizens and enhance awareness of issues, rather than as frameworks for direct decision making.
Smith, Curtis L.; Prescott, Steven; Kvarfordt, Kellie; Sampath, Ram; Larson, Katie
2015-09-01
Early in 2013, researchers at the Idaho National Laboratory outlined a technical framework to support the implementation of state-of-the-art probabilistic risk assessment to predict the safety performance of advanced small modular reactors. From that vision of the advanced framework for risk analysis, specific tasks have been underway in order to implement the framework. This report discusses the current development of several tasks related to the framework implementation, including a discussion of a 3D physics engine that represents the motion of objects (including collision and debris modeling), cloud-based analysis tools such as a Bayesian-inference engine, and scenario simulations. These tasks were performed during 2015 as part of the technical work associated with the Advanced Reactor Technologies Program.
Compressible, multiphase semi-implicit method with moment of fluid interface representation
Jemison, Matthew; Sussman, Mark; Arienti, Marco
2014-09-16
A unified method for simulating multiphase flows using an exactly mass, momentum, and energy conserving Cell-Integrated Semi-Lagrangian advection algorithm is presented. The deforming material boundaries are represented using the moment-of-fluid method. Our new algorithm uses a semi-implicit pressure update scheme that asymptotically preserves the standard incompressible pressure projection method in the limit of infinite sound speed. The asymptotically preserving attribute makes the new method applicable to compressible and incompressible flows including stiff materials; enabling large time steps characteristic of incompressible flow algorithms rather than the small time steps required by explicit methods. Moreover, shocks are captured and material discontinuities are tracked, without the aid of any approximate or exact Riemann solvers. As a result, simulations of underwater explosions and fluid jetting in one, two, and three dimensions are presented which illustrate the effectiveness of the new algorithm at efficiently computing multiphase flows containing shock waves and material discontinuities with large “impedance mismatch.”
Contacts for MicroBooNE plots and other data representations
Voisin, Nathalie; Li, Hongyi; Ward, Duane L.; Huang, Maoyi; Wigmosta, Mark S.; Leung, Lai-Yung R.
2013-09-30
Human influence on the hydrologic cycle includes regulation and storage, consumptive use and overall redistribution of water resources in space and time. Representing these processes is essential for applications of earth system models in hydrologic and climate predictions, as well as impact studies at regional to global scales. Emerging large-scale research reservoir models use generic operating rules that are flexible for coupling with earth system models. Those generic operating rules have been successful in reproducing the overall regulated flow at large basin scales. This study investigates the uncertainties of the reservoir models from different implementations of the generic operating rules using the complex multi-objective Columbia River Regulation System in northwestern United States as an example to understand their effects on not only regulated flow but also reservoir storage and fraction of the demand that is met. Numerical experiments are designed to test new generic operating rules that combine storage and releases targets for multi-purpose reservoirs and to compare the use of reservoir usage priorities, withdrawals vs. consumptive demand, as well as natural vs. regulated mean flow for calibrating operating rules. Overall the best performing implementation is the use of the combined priorities (flood control storage targets and irrigation release targets) operating rules calibrated with mean annual natural flow and mean monthly withdrawals. The challenge of not accounting for groundwater withdrawals, or on the contrary, assuming that all remaining demand is met through groundwater extractions, is discussed.
Schach Von Wittenau, Alexis E.
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
Elsworth, Derek; Izadi, Ghazal; Gan, Quan; Fang, Yi; Taron, Josh; Sonnenthal, Eric
2015-07-28
This work has investigated the roles of effective stress induced by changes in fluid pressure, temperature and chemistry in contributing to the evolution of permeability and induced seismicity in geothermal reservoirs. This work has developed continuum models [1] to represent the progress of seismicity during both stimulation [2] and production [3]. These methods have been used to resolve anomalous observations of induced seismicity at the Newberry Volcano demonstration project [4] through the application of modeling and experimentation. Later work then focuses on the occurrence of late stage seismicity induced by thermal stresses [5] including the codifying of the timing and severity of such responses [6]. Furthermore, mechanistic linkages between observed seismicity and the evolution of permeability have been developed using data from the Newberry project [7] and benchmarked against field injection experiments. Finally, discontinuum models [8] incorporating the roles of discrete fracture networks have been applied to represent stimulation and then thermal recovery for new arrangements of geothermal wells incorporating the development of flow manifolds [9] in order to increase thermal output and longevity in EGS systems.
Representation of Solar Capacity Value in the ReEDS Capacity Expansion Model
Sigrin, B.; Sullivan, P.; Ibanez, E.; Margolis, R.
2014-03-01
An important issue for electricity system operators is the estimation of renewables' capacity contributions to reliably meeting system demand, or their capacity value. While the capacity value of thermal generation can be estimated easily, assessment of wind and solar requires a more nuanced approach due to the resource variability. Reliability-based methods, particularly assessment of the Effective Load-Carrying Capacity (ELCC), are considered to be the most robust and widely accepted techniques for addressing this resource variability. This report compares estimates of solar PV capacity value by the Regional Energy Deployment System (ReEDS) capacity expansion model against two sources. The first comparison is against values published by utilities or other entities for known electrical systems at existing solar penetration levels. The second comparison is against a time-series ELCC simulation tool for high renewable penetration scenarios in the Western Interconnection. Results from the ReEDS model are found to compare well in both cases, despite being resolved at a super-hourly temporal resolution. Two results are relevant for other capacity-based models that use a super-hourly resolution to model solar capacity value. First, solar capacity value should not be parameterized as a static value, but must decay with increasing penetration. This is because, for an afternoon-peaking system, as solar penetration increases, the system's peak net load shifts to later in the day, when solar output is lower. Second, long-term planning models should determine system adequacy requirements in each time period in order to approximate loss-of-load probability (LOLP) calculations. Within the ReEDS model we resolve these issues by using a capacity value estimate that varies by time-slice. Within each time period the net load and the shadow price on ReEDS's planning reserve constraint signal the relative importance of additional firm capacity.
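The declining capacity value described in the first result can be reproduced with a toy afternoon-peaking system. The load and PV profiles below are invented for illustration; the point is only the mechanism: as PV capacity grows, the net-load peak migrates to evening hours where PV output is low, so each additional MW shaves less peak.

```python
import numpy as np

hours = np.arange(24)
load = 100.0 - 3.0 * np.abs(hours - 14)                      # afternoon peak (MW)
pv_shape = np.maximum(0.0, 1.0 - np.abs(hours - 12) / 6.0)   # per-MW PV output

def capacity_value(pv_mw):
    """Average peak-load reduction per MW of installed PV."""
    peak0 = load.max()
    peak = (load - pv_mw * pv_shape).max()
    return (peak0 - peak) / pv_mw

cv_low = capacity_value(5.0)    # small penetration: peak still mid-afternoon
cv_high = capacity_value(50.0)  # large penetration: net peak moves to evening
```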
Program for Converting C/C++ Texts to FrameMaker Representation
Energy Science and Technology Software Center (OSTI)
1995-06-23
Used to insert pieces of C/C=+code into FrameMaker documents with fontified Language tokens. Different fonts can be used for different language constructions such as float, while, etc.
Mixed Legendre moments and discrete scattering cross sections for anisotropy representation
Calloo, A.; Vidal, J. F.; Le Tellier, R.; Rimpault, G.
2012-07-01
This paper deals with the resolution of the integro-differential form of the Boltzmann transport equation for neutron transport in nuclear reactors. In multigroup theory, deterministic codes use transfer cross sections which are expanded on Legendre polynomials. This modelling leads to negative values of the transfer cross section for certain scattering angles, and hence, the multigroup scattering source term is wrongly computed. The first part compares the convergence of 'Legendre-expanded' cross sections with respect to the order used with the method of characteristics (MOC) for Pressurised Water Reactor (PWR) type cells. Furthermore, the cross section is developed using piecewise-constant functions, which better models the multigroup transfer cross section and prevents the occurrence of any negative value for it. The second part focuses on the method of solving the transport equation with the above-mentioned piecewise-constant cross sections for lattice calculations for PWR cells. This expansion thereby constitutes a 'reference' method to compare the conventional Legendre expansion to, and to determine its pertinence when applied to reactor physics calculations. (authors)
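The negativity problem and the piecewise-constant remedy can both be demonstrated with a generic forward-peaked scattering law; exp(5*mu) below is an arbitrary stand-in for a real multigroup transfer cross section, and the expansion order and bin count are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legval

mu = np.linspace(-1.0, 1.0, 2001)
dmu = mu[1] - mu[0]
f = np.exp(5.0 * mu)        # strongly forward-peaked "transfer cross section"

# P0..P3 expansion: coefficient on P_l is (2l+1)/2 * integral f(mu) P_l(mu) dmu
order = 3
coeffs = []
for l in range(order + 1):
    Pl = legval(mu, [0.0] * l + [1.0])        # evaluate P_l on the grid
    fl = (f * Pl).sum() * dmu                 # crude quadrature
    coeffs.append((2 * l + 1) / 2.0 * fl)
f_legendre = legval(mu, coeffs)               # truncated expansion

# piecewise-constant alternative: bin averages, nonnegative by construction
nbins = 8
edges = np.linspace(-1.0, 1.0, nbins + 1)
bin_avg = np.array([f[(mu >= a) & (mu <= b)].mean()
                    for a, b in zip(edges[:-1], edges[1:])])
```

The truncated Legendre series dips below zero at backward angles (mu near -1), which is exactly the pathology the paper describes, while every piecewise-constant bin average of the positive cross section remains positive.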
Scenarios of Building Energy Demand for China with a Detailed Regional Representation
Yu, Sha; Eom, Jiyong; Zhou, Yuyu; Evans, Meredydd; Clarke, Leon E.
2014-02-07
Building energy consumption currently accounts for 28% of China's total energy use and is expected to continue to grow, driven by floorspace expansion, income growth, and population change. Fuel sources and building services are also evolving over time as well as across regions and building types. To understand sectoral and regional differences in building energy use and how socioeconomic, physical, and technological development influence the evolution of the Chinese building sector, this study developed a building energy use model for China downscaled into four climate regions under an integrated assessment framework. Three building types (rural residential, urban residential, and commercial) were modeled specifically in each climate region. Our study finds that the Cold and Hot Summer Cold Winter regions lead in total building energy use. The impact of climate change on heating energy use is more significant than that on cooling energy use in most climate regions. Both rural and urban households will experience a fuel switch from fossil fuels to cleaner fuels. Commercial buildings will experience rapid growth in electrification and energy intensity. The improved understanding of Chinese buildings with climate change highlighted in this study will help policy makers develop targeted policies and prioritize building energy efficiency measures.
Ringrose, P.; Pickup, G.; Jensen, J.
1997-08-01
We have used a reservoir gridblock-sized outcrop (10 m by 100 m) of fluvio-deltaic sandstones to evaluate the importance of internal heterogeneity for a hypothetical waterflood displacement process. Using a dataset based on probe permeameter measurements taken from two vertical transects representing "wells" (5 cm sampling) and one "core" sample (exhaustive 1 mm-spaced sampling), we evaluate the permeability variability at different lengthscales, the correlation characteristics (structure of the variogram function), and larger-scale trends. We then relate these statistical measures to the sedimentology. We show how the sediment architecture influences the effective tensor permeability at the lamina and bed scale, and then calculate the effective relative permeability functions for a waterflood. We compare the degree of oil recovery from the formation: (a) using averaged borehole data and no geological structure, and (b) modelling the sediment architecture of the interwell volume using mixed stochastic/deterministic methods. We find that the sediment architecture has an important effect on flow performance, mainly due to bedscale capillary trapping and a consequent reduction in the effective oil mobility. The predicted oil recovery differs by 18% when these small-scale effects are included in the model. Traditional reservoir engineering methods, using averaged permeability values, only prove acceptable in high-permeability and low-heterogeneity zones. The main outstanding challenge, represented by this illustration of sub-gridblock scale heterogeneity, is how to capture the relevant geological structure along with the inherent geo-statistical variability. An approach to this problem is proposed.
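The variogram analysis mentioned above can be sketched as follows; the synthetic correlated series stands in for a probe permeameter transect, and names are illustrative.

```python
import numpy as np

def experimental_variogram(z, lags):
    """Classical semivariogram: gamma(h) = 0.5 * mean((z[i+h] - z[i])^2)."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])

# synthetic "permeability transect": a spatially correlated (random-walk) series
rng = np.random.default_rng(2)
z = np.cumsum(rng.normal(size=2000))
lags = [1, 2, 5, 10]
gamma = experimental_variogram(z, lags)
```

For a spatially correlated series the semivariance grows with lag distance before levelling off at the sill; the lag structure is what links the statistics back to lamina- and bed-scale sediment architecture.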
Gnome View: A tool for visual representation of human genome data
Pelkey, J.E.; Thomas, G.S.; Thurman, D.A.; Lortz, V.B.; Douthart, R.J.
1993-02-01
GnomeView is a tool for exploring data generated by the Human Genome Project. GnomeView provides both graphical and textual styles of data presentation; employs an intuitive window-based graphical query interface; and integrates its underlying genome databases in such a way that the user can navigate smoothly across databases and between different levels of data. This paper describes GnomeView and discusses how it addresses various genome informatics issues.
Adiabatic representation in the three-body problem with Coulomb interaction
Vinitskii, S.I.; Ponomarev, L.I.
1982-11-01
An effective method for solving the three-body problem with Coulomb interaction is presented systematically. The essential feature of the method is an expansion of the wave function of the three-particle system with respect to an adiabatic basis and reduction of the original Schroedinger equation to a system of ordinary differential equations. Convergence of the adiabatic expansion is ensured not only by the smallness of the ratio of the particle masses but also by the smallness of the nondiagonal matrix elements of the kinetic-energy operator of particles of the same charge. The possibilities of the method are demonstrated by the example of the calculation of the energies and wave functions of all states of the μ-mesic molecules of the hydrogen isotopes and the e⁻e⁻e⁺ system. The method is equally suitable for calculating the ground state and the excited states of a three-particle system. This is particularly important in the calculation of the energies of the weakly bound states of the mesic molecules ddμ and dtμ, knowledge of which is needed to describe the processes of muonic catalysis of nuclear fusion reactions.
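Schematically, the adiabatic expansion and the resulting system of ordinary differential equations take the generic form below (symbols are schematic: R is the slow coordinate, r the remaining degrees of freedom, U_n the adiabatic potential curves, and W_{nm} the nonadiabatic couplings):

```latex
\Psi(r, R) \;=\; \sum_n \phi_n(r; R)\, \chi_n(R), \qquad
\left[-\frac{d^2}{dR^2} + U_n(R) - E\right]\chi_n(R)
  \;=\; \sum_m W_{nm}(R)\, \chi_m(R).
```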
On representation of temporal variability in electricity capacity planning models
Merrick, James H.
2016-08-23
This study systematically investigates how to represent intra-annual temporal variability in models of optimum electricity capacity investment. Inappropriate aggregation of temporal resolution can introduce substantial error into model outputs and associated economic insight. The mechanisms underlying the introduction of this error are shown. How many representative periods are needed to fully capture the variability is then investigated. For a sample dataset, a scenario-robust aggregation of hourly (8760) resolution is possible in the order of 10 representative hours when electricity demand is the only source of variability. The inclusion of wind and solar supply variability increases the resolution of the robust aggregation to the order of 1000. A similar scale of expansion is shown for representative days and weeks. These concepts can be applied to any such temporal dataset, providing, at the least, a benchmark that any other aggregation method can aim to emulate. Finally, how prior information about peak pricing hours can potentially reduce resolution further is also discussed.
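A common way to construct such representative periods is clustering. This numpy-only sketch (synthetic demand series, illustrative names; not the paper's aggregation procedure) compresses an 8760-hour year into 10 weighted representative hours via a tiny 1-D Lloyd's k-means:

```python
import numpy as np

def representative_hours(x, k=10, iters=50):
    """Tiny 1-D Lloyd's k-means: compress an hourly series into k
    representative values with occurrence weights."""
    centroids = np.quantile(x, np.linspace(0.05, 0.95, k))  # spread-out init
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):        # keep old centroid if cluster empty
                centroids[j] = x[labels == j].mean()
    weights = np.bincount(labels, minlength=k) / len(x)
    return centroids, weights

# synthetic 8760-hour demand with daily and seasonal cycles
t = np.arange(8760)
demand = (50.0 + 10.0 * np.sin(2 * np.pi * t / 24.0)
          + 5.0 * np.sin(2 * np.pi * t / 8760.0))
reps, w = representative_hours(demand)
```

The weighted representatives preserve aggregate quantities (here, mean demand) exactly, but, as the abstract notes, far more representatives are needed once wind and solar variability must be captured jointly with demand.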
Representation of the Solar Capacity Value in the ReEDS Capacity Expansion Model: Preprint
Sigrin, B.; Sullivan, P.; Ibanez, E.; Margolis, R.
2014-08-01
An important emerging issue is the estimation of renewables' contributions to reliably meeting system demand, i.e., their capacity value (CV). While the capacity value of thermal generation can be estimated easily, assessment of wind and solar requires a more nuanced approach due to resource variability. Reliability-based methods, particularly effective load-carrying capacity (ELCC), are considered the most robust techniques for addressing this resource variability. The Regional Energy Deployment System (ReEDS) capacity expansion model and other long-term electricity capacity planning models require an approach to estimating CV for generalized PV and system configurations with low computational and data requirements. In this paper we validate the treatment of solar photovoltaic (PV) capacity value in the ReEDS capacity expansion model by comparing model results to the literature for a range of energy penetration levels. Results from the ReEDS model are found to compare well with the literature values, despite not being resolved at an hourly scale.
Scalable Association Rule Mining with Predicates on Semantic Representations of Data
Tsay, Li-Shiang; Sukumar, Sreenivas R; Roberts, Larry W
2015-01-01
Finding semantic associations from a vast amount of heterogeneous data is an important and useful task in various applications. We present a framework to extract semantic association patterns directly from a very large graph dataset without the extra step of converting graph data into transaction data.
Dispersive representation and shape of the Kl3 form factors: Robustness
Bernard, Veronique; Stern, Jan; Passemar, Emilie
2009-08-01
An accurate low-energy dispersive parametrization of the scalar Kπ form factor was constructed some time ago in terms of a single parameter guided by the Callan-Treiman low-energy theorem. A similar twice-subtracted dispersive parametrization for the vector Kπ form factor will be investigated here. The robustness of the parametrization of these two form factors will be studied in great detail. In particular the cutoff dependence, the isospin-breaking effects, and the possible, though not highly probable, presence of zeros in the form factors will be discussed. Interesting constraints in the latter case will be obtained from the soft-kaon analog of the Callan-Treiman theorem and a comparison with the recent τ → Kπν_τ data.
Knowledge representation and the application of case-based reasoning in engineering design
Bhangal, J.S.; Esat, I.
1996-12-31
This paper assesses the requirements for applying Case-Based Reasoning (CBR) to engineering design. It discusses the ways in which a CBR system can assist a designer presented with a problem specification, and the various methods that need to be understood before attempting to build such an expert system. The problem is twofold: first, the methods of utilizing CBR are varied, and second, the method of representing design knowledge also needs to be established. How a design is represented differs for each application, and this decision needs to be made when setting up the case memory; the methods used are discussed here. CBR itself can also be utilized in various ways, and previous applications have shown that a hybrid approach can produce the best results.
McKone, T.E.; Bennett, D.H.
2002-08-01
In multimedia mass-balance models, the soil compartment is an important sink as well as a conduit for transfers to vegetation and shallow groundwater. Here a novel approach for constructing soil transport algorithms for multimedia fate models is developed and evaluated. The resulting algorithms account for diffusion in gas and liquid components; advection in gas, liquid, or solid phases; and multiple transformation processes. They also provide an explicit quantification of the characteristic soil penetration depth. We construct a compartment model using three and four soil layers to replicate with high reliability the flux and mass distribution obtained from the exact analytical solution describing the transient dispersion, advection, and transformation of chemicals in soil with fixed properties and boundary conditions. Unlike the analytical solution, which requires fixed boundary conditions, the soil compartment algorithms can be dynamically linked to other compartments (air, vegetation, ground water, surface water) in multimedia fate models. We demonstrate and evaluate the performance of the algorithms in a model with applications to benzene, benzo(a)pyrene, MTBE, TCDD, and tritium.
Request for Proposal No. DE-SOL-0007749 PART IV - REPRESENTATIONS...
National Nuclear Security Administration (NNSA)
... of the rights of the United States in inventions conceived or first actually reduced to ... rights of the Government in identified inventions, i.e., individual inventions conceived ...
The MC21 Monte Carlo Transport Code
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
Parallel performance of a preconditioned CG solver for unstructured finite element applications
Shadid, J.N.; Hutchinson, S.A.; Moffat, H.K.
1994-12-31
A parallel unstructured finite element (FE) implementation designed for message passing MIMD machines is described. This implementation employs automated problem partitioning algorithms for load balancing unstructured grids, a distributed sparse matrix representation of the global finite element equations and a parallel conjugate gradient (CG) solver. In this paper a number of issues related to the efficient implementation of parallel unstructured mesh applications are presented. These include the differences between structured and unstructured mesh parallel applications, major communication kernels for unstructured CG solvers, automatic mesh partitioning algorithms, and the influence of mesh partitioning metrics on parallel performance. Initial results are presented for example finite element (FE) heat transfer analysis applications on a 1024 processor nCUBE 2 hypercube. Results indicate over 95% scaled efficiencies are obtained for some large problems despite the required unstructured data communication.
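The "major communication kernels" the abstract refers to can be made concrete with a small sketch: the two operations inside a CG iteration that require communication in a distributed setting are the sparse matrix-vector product (halo exchange of off-processor entries) and the global dot product (all-reduce). The serial pure-Python version below, with a hypothetical row-dictionary matrix format, shows where those kernels sit in the iteration; it is illustrative, not the paper's implementation.

```python
# Serial sketch of a CG solver highlighting its two communication-critical
# kernels: spmv (would need a halo exchange in parallel) and dot (would
# need a global all-reduce). Matrix stored row-wise as {row: {col: value}}.

def spmv(rows, x):
    return [sum(v * x[j] for j, v in rows[i].items()) for i in range(len(x))]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cg(rows, b, tol=1e-20, max_iter=200):
    x = [0.0] * len(b)
    r = b[:]                              # residual b - A x, with x = 0
    p = r[:]
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = spmv(rows, p)                # communication kernel 1
        alpha = rs / dot(p, Ap)           # communication kernel 2
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 1D Laplacian (symmetric positive definite), 5 unknowns.
n = 5
rows = {i: {i: 2.0} for i in range(n)}
for i in range(n - 1):
    rows[i][i + 1] = rows[i + 1][i] = -1.0
b = [1.0] * n
x = cg(rows, b)
res = [bi - axi for bi, axi in zip(b, spmv(rows, x))]
assert max(abs(ri) for ri in res) < 1e-8
```

In the unstructured-mesh setting the quality of the partition determines how much off-processor data each `spmv` must fetch, which is exactly why the paper studies partitioning metrics.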
Parallel performance of a preconditioned CG solver for unstructured finite element applications
Shadid, J.N.; Hutchinson, S.A.; Moffat, H.K.
1994-06-01
A parallel unstructured finite element (FE) implementation designed for message passing machines is described. This implementation employs automated problem partitioning algorithms for load balancing unstructured grids, a distributed sparse matrix representation of the global finite element equations and a parallel conjugate gradient (CG) solver. In this paper a number of issues related to the efficient implementation of parallel unstructured mesh applications are presented. These include the differences between structured and unstructured mesh parallel applications, major communication kernels for unstructured CG solvers, automatic mesh partitioning algorithms, and the influence of mesh partitioning metrics on parallel performance. Initial results are presented for example finite element (FE) heat transfer analysis applications on a 1024 processor nCUBE 2 hypercube. Results indicate over 95% scaled efficiencies are obtained for some large problems despite the required unstructured data communication.
PRACTICAL METHOD FOR ESTIMATING NEUTRON CROSS SECTION COVARIANCES IN THE RESONANCE REGION
Cho, Y.S.; Oblozinsky, P.; Mughabghab,S.F.; Mattoon,C.M.; Herman,M.
2010-04-30
Recent evaluations of neutron cross section covariances in the resolved resonance region reveal the need for further research in this area. Major issues include declining uncertainties in multigroup representations and proper treatment of scattering-radius uncertainty. To address these issues, the present work introduces a practical method based on kernel approximation, using resonance parameter uncertainties from the Atlas of Neutron Resonances. Analytical expressions derived for average cross sections in broader energy bins, along with their sensitivities, provide a transparent tool for determining cross section uncertainties. The role of resonance-resonance and bin-bin correlations is specifically studied. As an example we apply this approach to estimate (n,γ) and (n,el) covariances for the structural material ⁵⁵Mn.
Energy Science and Technology Software Center (OSTI)
2014-09-16
A simple code generator that produces the low-level code kernels used by the QPhiX library for lattice QCD. It generates kernels for the Wilson-Dslash and Wilson-Clover operators, and can be reused to write other optimized kernels for Intel Xeon Phi(tm), Intel Xeon(tm), and potentially other architectures.
Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks
Ziaul Huque
2007-08-31
This is the final technical report for the project titled 'Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks'. The aim of the project was to develop an efficient chemistry model for combustion simulations. The reduced chemistry model was developed mathematically without the need for extensive knowledge of the chemistry involved. To aid in the development of the model, neural networks (NN) were used via a new network topology known as Non-linear Principal Components Analysis (NPCA). A commonly used Multilayer Perceptron Neural Network (MLP-NN) was modified to implement NPCA-NN. The training rate of NPCA-NN was improved with the GEneralized Regression Neural Network (GRNN) based on kernel smoothing techniques. Kernel smoothing provides a simple way of finding structure in a data set without imposing a parametric model. The trajectory data of the reaction mechanism was generated based on the optimization techniques of genetic algorithms (GA). The NPCA-NN algorithm was then used for the reduction of the Dimethyl Ether (DME) mechanism. DME is a recently developed fuel made from natural gas (and other feedstocks such as coal, biomass, and urban wastes) which can be used in compression-ignition engines as a substitute for diesel. An in-house two-dimensional Computational Fluid Dynamics (CFD) code was developed based on a meshfree technique and a time-marching solution algorithm. The project also provided valuable research experience to two graduate students.
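The kernel smoothing underlying a GRNN can be sketched in a few lines: the prediction is a kernel-weighted average of the training targets (the Nadaraya-Watson form), so structure emerges from the data without fitting a parametric model. This is an illustrative one-dimensional sketch, not the report's NPCA-NN implementation; the bandwidth `sigma` and the sample data are invented.

```python
# Minimal sketch of GRNN-style kernel smoothing (Nadaraya-Watson):
# predict y at x as a Gaussian-kernel-weighted average of training targets.
import math

def grnn_predict(x_train, y_train, x, sigma=0.5):
    w = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2)) for xi in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)

# Noisy samples of y = x^2 on [0, 2] (alternating-sign noise).
xs = [i * 0.1 for i in range(21)]
ys = [x * x + 0.01 * ((-1) ** i) for i, x in enumerate(xs)]
y_hat = grnn_predict(xs, ys, 1.0, sigma=0.1)
assert abs(y_hat - 1.0) < 0.1   # smoothed estimate near x^2 = 1
```

A smaller `sigma` tracks the data more closely (less smoothing); a larger one averages over more neighbors, which is the usual bias-variance trade-off of kernel smoothers.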
Yadava, G; Imai, Y; Hsieh, J
2014-06-15
Purpose: Quantitative accuracy of Iodine Hounsfield Unit (HU) in conventional single-kVp scanning is susceptible to beam-hardening effect. Dual-energy CT has unique capabilities of quantification using monochromatic CT images, but this scanning mode requires the availability of a state-of-the-art CT scanner and, therefore, is limited in routine clinical practice. Purpose of this work was to develop a beam-hardening-correction (BHC) for single-kVp CT that can linearize Iodine projections at any nominal energy, apply this approach to study Iodine response with respect to keV, and compare with dual-energy based monochromatic images obtained from material-decomposition using 80kVp and 140kVp. Methods: Tissue characterization phantoms (Gammex Inc.), containing solid-Iodine inserts of different concentrations, were scanned using a GE multi-slice CT scanner at 80, 100, 120, and 140 kVp. A model-based BHC algorithm was developed where Iodine was estimated using re-projection of image volume and corrected through an iterative process. In the correction, the re-projected Iodine was linearized using a polynomial mapping between monochromatic path-lengths at various nominal energies (40 to 140 keV) and physically modeled polychromatic path-lengths. The beam-hardening-corrected 80kVp and 140kVp images (linearized approximately at effective energy of the beam) were used for dual-energy material-decomposition in a Water-Iodine basis-pair followed by generation of monochromatic images. Characterization of Iodine HU and noise in the images obtained from single-kVp with BHC at various nominal keV, and corresponding dual-energy monochromatic images, was carried out. Results: Iodine HU vs. keV response from single-kVp with BHC and dual-energy monochromatic images were found to be very similar, indicating that single-kVp data may be used to create material specific monochromatic equivalent using model-based projection linearization.
Conclusion: This approach may enable quantification of Iodine contrast enhancement and potential reduction in injected contrast without using dual-energy scanning. However, in general, dual-energy scanning has unique value in material characterization and quantification, and its value cannot be discounted. GE Healthcare Employee.
Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.
2013-08-21
Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., the Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0 to t_M can be transformed into a root finding problem, F(X) = [x_i - f(x_{i-1})]_{i=1,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3.
The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H{sub 2}O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
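The reformulation at the heart of the abstract, recasting the serial recursion x_{i+1} = f(x_i) as the root-finding problem F(X) = [x_i - f(x_{i-1})] = 0, can be sketched for a toy system. The example below uses a 1-D harmonic oscillator with a velocity-Verlet step (illustrative values for the time step and spring constant), and solves F(X) = 0 with plain fixed-point sweeps rather than the paper's quasi-Newton schemes; the point is that every entry of F can be evaluated concurrently.

```python
# Sketch of the parallel-in-time formulation: the serial trajectory is an
# exact root of F(X) = [x_i - f(x_{i-1})] = 0, and all residual entries
# can be evaluated independently (one processor per time step).

DT, K = 0.05, 1.0   # time step and spring constant (illustrative values)

def f(state):
    """One velocity-Verlet step for x'' = -K x."""
    r, v = state
    a = -K * r
    r_new = r + v * DT + 0.5 * a * DT * DT
    v_new = v + 0.5 * (a + (-K * r_new)) * DT
    return (r_new, v_new)

def residual(traj):
    """Entries of F(X); each entry depends only on two adjacent steps."""
    return [(traj[i][0] - f(traj[i - 1])[0],
             traj[i][1] - f(traj[i - 1])[1]) for i in range(1, len(traj))]

# Reference serial trajectory from x0 over M steps.
x0, M = (1.0, 0.0), 20
serial = [x0]
for _ in range(M):
    serial.append(f(serial[-1]))
assert max(abs(c) for pair in residual(serial) for c in pair) < 1e-12

# Fixed-point sweeps from a poor initial guess converge to the same root.
guess = [x0] + [(0.0, 0.0)] * M
for _ in range(M):
    guess = [x0] + [f(guess[i]) for i in range(M)]
err = max(abs(g[0] - s[0]) for g, s in zip(guess, serial))
assert err < 1e-12
```

The naive sweep above only propagates information one step per iteration, which is why the paper turns to quasi-Newton solvers and coarse-model preconditioners to get useful parallel speedup.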
Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.
2013-08-21
Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., the Verlet algorithm), is available to propagate the system from time ti (trajectory positions and velocities xi = (ri, vi)) to time ti+1 (xi+1) by xi+1 = fi(xi), the dynamics problem spanning an interval from t0...tM can be transformed into a root finding problem, F(X) = [xi - f(xi-1)]i=1,M = 0, for the trajectory variables. The root finding problem is solved using a variety of optimization techniques, including quasi-Newton and preconditioned quasi-Newton optimization schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl+4H2O AIMD simulation at the MP2 level. The maximum speedup obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a distributed computing environment using very slow TCP/IP networks.
Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl+4H2O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. By using these algorithms we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 seconds per time step to 6.9 seconds per time step.
Hagos, Samson M.; Leung, Lai-Yung R.
2012-11-01
Cloud resolving model (CRM) simulations and vector analysis are used to develop a quantitative method of assessing regional variations in the relationships between various large-scale environmental variables and the transition to deep convection. Results of the CRM simulations from three tropical regions are used to cluster environmental conditions under which transition to deep convection does and does not take place. Projections of the large-scale environmental variables on the difference between these two clusters are used to quantify the roles of these variables in the transition to deep convection. While the transition to deep convection is most sensitive to moisture and vertical velocity perturbations, the details of the profiles of the anomalies vary from region to region. In comparison, the transition to deep convection is found to be much less sensitive to temperature anomalies over all three regions. The vector formulation presented in this study represents a simple general framework for quantifying various aspects of how the transition to deep convection is sensitive to environmental conditions.
Sathaye, J.; Xu, T.; Galitsky, C.
2010-08-15
Adoption of efficient end-use technologies is one of the key measures for reducing greenhouse gas (GHG) emissions. How to effectively analyze and manage the costs associated with GHG reductions becomes extremely important for industry and policy makers around the world. Energy-climate (EC) models are often used for analyzing the costs of reducing GHG emissions for various emission-reduction measures, because an accurate estimation of these costs is critical for identifying and choosing optimal emission reduction measures, and for developing related policy options to accelerate market adoption and technology implementation. However, the accuracy of assessing GHG-emission reduction costs that account for the adoption of energy efficiency technologies depends on how well these end-use technologies are represented in integrated assessment models (IAMs) and other energy-climate models.
Shepherd, Jason; Mitchell, Scott A.; Jankovich, Steven R.; Benzley, Steven E.
2007-05-15
The present invention provides a meshing method, called grafting, that lifts the prior-art constraint on abutting surfaces, including surfaces that are linking, source/target, or other types of surfaces of the trunk volume. The grafting method locally modifies the structured mesh of the linking surfaces, allowing the mesh to conform to additional surface features. Thus, the grafting method can provide a transition between multiple sweep directions, extending sweeping algorithms to 2 3/4-D solids. The method is also suitable for use with non-sweepable volumes; it provides a transition between meshes generated by methods other than sweeping as well.
Larson, D.C.; Fu, C.Y.; Hetrick, D.M.
1987-01-01
The model code TNG has been extensively used in evaluation work of structural materials for ENDF/B-VI performed at Oak Ridge National Laboratory. A new aspect of ENDF/B-VI is the use of File 6 formats for energy-angle correlated data. Such data are generally calculated, anchored by experimental data. In this informal note we outline how the TNG results are calculated and entered in the File 6 formats. 4 refs.
Bond-Lamberty, Benjamin; Calvin, Katherine V.; Jones, Andrew D.; Mao, Jiafu; Patel, Pralit L.; Shi, Xiaoying; Thomson, Allison M.; Thornton, Peter E.; Zhou, Yuyu
2014-01-01
Human activities are significantly altering biogeochemical cycles at the global scale, posing a significant problem for earth system models (ESMs), which may incorporate static land-use change inputs but do not actively simulate policy or economic forces. One option to address this problem is to couple an ESM with an economically oriented integrated assessment model. Here we have implemented and tested a coupling mechanism between the carbon cycles of an ESM (CLM) and an integrated assessment model (GCAM), examining the best proxy variables to share between the models, and quantifying our ability to distinguish climate- and land-use-driven flux changes. CLM's net primary production and heterotrophic respiration outputs were found to be the most robust proxy variables by which to manipulate GCAM's assumptions of long-term ecosystem steady-state carbon, with short-term forest production strongly correlated with long-term biomass changes in climate-change model runs. By leveraging the fact that carbon-cycle effects of anthropogenic land-use change are short-term and spatially limited relative to widely distributed climate effects, we were able to distinguish these effects successfully in the model coupling, passing only the latter to GCAM. By allowing climate effects from a full earth system model to dynamically modulate the economic and policy decisions of an integrated assessment model, this work provides a foundation for linking these models in a robust and flexible framework capable of examining two-way interactions between human and earth system processes.
Representation of a complex Green function on a real basis: Generalization to a three-body system
Li Tieniu; Shakeshaft, Robin; Piraux, Bernard
2003-05-01
We develop further a new method for employing a set of real basis functions to represent the Green function at energies in the continuum, without regard for the asymptotic boundary conditions. The method is based on the analyticity of the Green function with respect to its underlying time scale. The diagonalization of large matrices is unnecessary. Although a large complex symmetric linear system of equations must be solved, this can be done with high stability and efficiency by using a generalization of the Cholesky decomposition of real positive definite symmetric matrices. We present results of test applications to ¹S-wave electron scattering from a hydrogen atom and photodetachment of the negative hydrogen ion. The extension from two- to three-body collisions entails the use of projection operators to distinguish different groups of asymptotic channels.
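The "generalization of the Cholesky decomposition" mentioned above can be sketched as follows: for a complex *symmetric* (not Hermitian) matrix, one factors A = L Lᵀ with a plain transpose and a complex square root on the diagonal, then solves by forward and back substitution. This is an illustrative sketch, not the authors' code; it omits pivoting and assumes the leading minors are nonzero, and the 2x2 test matrix is invented.

```python
# Hedged sketch: Cholesky-like factorization A = L L^T (plain transpose,
# no conjugation) for a complex symmetric matrix, plus triangular solves.
import cmath

def sym_cholesky(A):
    n = len(A)
    L = [[0j] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = cmath.sqrt(s)            # complex square root
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

def solve(A, b):
    L = sym_cholesky(A)
    n = len(b)
    y = [0j] * n                           # forward substitution: L y = b
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0j] * n                           # back substitution: L^T x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[4 + 1j, 1 + 2j], [1 + 2j, 3 - 1j]]   # symmetric, not Hermitian
b = [1 + 0j, 2 + 0j]
x = solve(A, b)
r = [sum(A[i][k] * x[k] for k in range(2)) - b[i] for i in range(2)]
assert max(abs(ri) for ri in r) < 1e-12
```

Unlike the Hermitian case, positive definiteness plays no role here, which is why stability for these complex symmetric systems is a point the abstract highlights.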
Pekney, Natalie J.; Cheng, Hanqi; Small, Mitchell J.
2015-11-05
Abstract: The objective of the current work was to develop a statistical method and associated tool to evaluate the impact of oil and natural gas exploration and production activities on local air quality.
SMART (Sandia's Modular Architecture for Robotics and Teleoperation) Ver. 1.0
Energy Science and Technology Software Center (OSTI)
2009-12-15
behaviors. Each module must have at a minimum an initialization routine, a parameter adjustment routine, and an update routine. The SMART runtime kernel runs continuously within a real-time embedded system. Each module is first set up by the kernel, initialized, and then updated at a fixed rate whenever it is in context. The kernel responds to operator-directed commands by changing the state of the system, changing parameters on individual modules, and switching behavioral modes. The SMART Editor is a tool used to define, verify, configure, and generate source code for a SMART control system. It uses icon representations of the modules, code patches from valid configurations of the modules, and configuration files describing how a module can be connected into a system to lead the end-user through the steps needed to create a final system. The SMART Supervisor serves as an interface to a SMART run-time system. It provides an interface on a host computer that connects to the embedded system via TCP/IP ASCII commands. It utilizes a scripting language (Tcl) and a graphics windowing environment (Tk). This system can either be customized to fit an end-user's needs or completely replaced as needed.
U-232: Xen p2m_teardown() Bug Lets Local Guest OS Users Deny...
unavailable and may cause the domain 0 kernel to panic. There is no requirement for memory sharing to be in use. Impact: A guest kernel can cause the host to become unresponsive...
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report - Phase I
Mark S. Schmalz
2011-07-24
Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G' for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in the solution of problems related to efficient
Energy Science and Technology Software Center (OSTI)
2005-02-15
This patch provides an API that facilitates developing kernel modules that function as NMI (nonmaskable interrupt) handlers.
Energy Science and Technology Software Center (OSTI)
002977WKSTN00 libMSR library and msr-safe kernel module https://github.com/scalability-llnl/libmsr
2D stochastic-integral models for characterizing random grain noise in titanium alloys
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Cherry, Matthew; Pilchak, Adam; Knopp, Jeremy S.; Blodgett, Mark P.
2014-02-18
We extend our previous work, in which we applied high-dimensional model representation (HDMR) and analysis of variance (ANOVA) concepts to the characterization of a metallic surface that has undergone a shot-peening treatment to reduce residual stresses and has, therefore, become a random conductivity field. That example was treated as a one-dimensional problem, because those were the only data available. In this study, we develop a more rigorous two-dimensional model for characterizing random, anisotropic grain noise in titanium alloys. Such a model is necessary if we are to accurately capture the 'clumping' of crystallites into long chains that appear during the processing of the metal into a finished product. The mathematical model starts with an application of the Karhunen-Loève (K-L) expansion for the random Euler angles that characterize the orientation of each crystallite in the sample. The random orientation of each crystallite then defines the stochastic nature of the electrical conductivity tensor of the metal. We study two possible covariances, Gaussian and double-exponential, which serve as the kernel of the K-L integral equation, and find that of the two, the double-exponential appears to match measurements more closely. Results based on data from a Ti-7Al sample will be given, and further applications of HDMR and ANOVA will be discussed.
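The K-L machinery with a double-exponential kernel can be sketched numerically: discretize the covariance C(x, x') = exp(-|x - x'| / ell) on a 1-D grid and extract the leading eigenpair of the integral operator by power iteration; that eigenpair is the first term of the K-L expansion. This is an illustrative sketch, not the paper's 2-D model; the grid size and correlation length `ell` are invented.

```python
# Illustrative sketch: leading Karhunen-Loeve mode of a double-exponential
# covariance kernel, via power iteration on the discretized operator.
import math

def kl_leading_mode(n=40, length=1.0, ell=0.3, iters=500):
    h = length / n
    xs = [(i + 0.5) * h for i in range(n)]
    C = [[math.exp(-abs(xi - xj) / ell) for xj in xs] for xi in xs]
    v = [1.0] * n
    for _ in range(iters):
        # Apply the integral operator (C v)(x_i) ~ sum_j C_ij v_j h
        w = [sum(C[i][j] * v[j] for j in range(n)) * h for i in range(n)]
        norm = math.sqrt(sum(wi * wi for wi in w) * h)
        v = [wi / norm for wi in w]
    lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(n)) * h
              for i in range(n)) * h
    return lam, v, C, h

lam, v, C, h = kl_leading_mode()
n = len(v)
assert lam > 0                       # covariance operator is positive
# Eigen-relation C v ~ lam v holds on the grid after convergence.
Cv = [sum(C[i][j] * v[j] for j in range(n)) * h for i in range(n)]
assert max(abs(Cv[i] - lam * v[i]) for i in range(n)) < 1e-6
```

Sampling a random field then amounts to summing eigenmodes weighted by sqrt(lam) times independent random coefficients, which is how the crystallite-orientation fields in the paper are generated in principle.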
Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; Vetter, Jeffrey S.
2015-07-13
Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.
MEASURING THE MASS DISTRIBUTION IN GALAXY CLUSTERS
Geller, Margaret J.; Diaferio, Antonaldo; Rines, Kenneth J.; Serra, Ana Laura E-mail: diaferio@ph.unito.it E-mail: serra@to.infn.it
2013-02-10
Cluster mass profiles are tests of models of structure formation. Only two current observational methods of determining the mass profile, gravitational lensing and the caustic technique, are independent of the assumption of dynamical equilibrium. Both techniques enable the determination of the extended mass profile at radii beyond the virial radius. For 19 clusters, we compare the mass profile based on the caustic technique with weak lensing measurements taken from the literature. This comparison offers a test of systematic issues in both techniques. Around the virial radius, the two methods of mass estimation agree to within ~30%, consistent with the expected errors in the individual techniques. At small radii, the caustic technique overestimates the mass, as expected from numerical simulations. The ratio between the lensing profile and the caustic mass profile at these radii suggests that the weak lensing profiles are a good representation of the true mass profile. At radii larger than the virial radius, the extrapolated Navarro, Frenk and White fit to the lensing mass profile exceeds the caustic mass profile. Contamination of the lensing profile by unrelated structures within the lensing kernel may be an issue in some cases; we highlight the clusters MS0906+11 and A750, superposed along the line of sight, to illustrate the potential seriousness of contamination of the weak lensing signal by such unrelated structures.
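For reference, the Navarro-Frenk-White (NFW) profile mentioned above has a closed-form enclosed mass, which is what gets extrapolated beyond the virial radius in a fit like the one compared here. A minimal sketch (the parameter values in the usage lines are arbitrary, and no data from the paper are reproduced):

```python
import math

def nfw_mass(r, rho0, rs):
    """Enclosed mass of a Navarro-Frenk-White profile,
    rho(r) = rho0 / ((r/rs) * (1 + r/rs)^2), which integrates to
    M(<r) = 4*pi*rho0*rs^3 * (ln(1 + r/rs) - (r/rs)/(1 + r/rs))."""
    x = r / rs
    return 4.0 * math.pi * rho0 * rs**3 * (math.log1p(x) - x / (1.0 + x))

# Illustrative usage: the enclosed mass keeps growing logarithmically
# beyond the scale radius, which is why an extrapolated NFW fit can
# exceed a directly measured profile at large radii.
m_inner = nfw_mass(1.0, rho0=1.0, rs=0.25)
m_outer = nfw_mass(3.0, rho0=1.0, rs=0.25)
```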
Fernando Lolas; Carolina Valdebenito; Eduardo Rodríguez; Irene Schiattino; Adelio Misseroni
2007-07-09
The effects of genetic knowledge beyond the scientific community depend on processes of social construction of risks and benefits, or perils and possibilities, which are different in different communities. In a globalized world, new developments affect societies not capable of technically replicating them and unaware of the very nature of the scientific process. Moral and legal consequences, however, diffuse rapidly and involve groups and persons with scant or no knowledge about the way scientific concepts are developed or perfected. Leading genomics researchers view their field as developing after a sharp break with the worldwide social movement of the 1920s and 1930s known as eugenics and its most radical expression in the Nazi efforts to destroy 'life not worth living.' Manipulation, prejudice and mistrust, however, pervade non-expert accounts of current research. Researchers claim that the new knowledge will have a positive impact on medicine and serve as a foundation for informed social policy. Both types of applications depend on informed communities of non-scientists (physicians, policymakers), whose members may well differ on what constitutes burden and what is benefit, depending upon professional socialization and cultural bias. ELSI projects associated with genomic research are notable for the lack of minorities involved and for the absence of comparative analysis of data reception in different world communities. It may also be contended that the critical potential of philosophical or ethical analyses is reduced by their being situated within the scientific process itself and carried out by members of the expert community, thus reducing independence of judgment. The majority of those involved in such studies, by tradition, experience, and formative influences, share the same worldview about the nature of moral dilemmas or the feasibility of intended applications.
The global effects of new knowledge when combined with other cultural or religious traditions are thus unknown. These effects are interesting on two accounts. First, even if underdeveloped countries cannot replicate the technical aspects of research, their influence on social practices is not kept within geographical or language barriers. The way they are handled in developed countries may become part of resistance to ethical imperialism. Second, these advances have economic consequences. Their full understanding, and the creation of a scientific literacy essential for sound ethical analysis, demand the creation of receptive capacity in developing countries. The morality of genomics research and its applications can be analyzed from two main vantage points. Some traditions stress the ethics of convictions (in Max Weber's terms, Gesinnungsethik) while others rely on the ethics of responsibility (Verantwortungsethik). In different forms, the latter deals with the consequences of social action, scientific research in this case, and may or may not be related to utilitarian considerations. It may be hypothesized that convictions, mostly of a religious nature, dominate the argumentative preferences in Latin countries and continental European traditions, which rely on virtues, while responsibility is associated with a discourse based on rights prevalent in countries following the Anglo-Saxon pattern of thought. This finds expression in different legal systems (common law versus codes) and in the language used for deliberation and moral reasoning. Although results of US-based ELSI research may not be transferable to other cultural and economic contexts, they impact other societies and serve as models. Rarely do they apply completely in other settings.
In a globalized world, both appropriate understanding of the scientific enterprise and its ethical or economic sustainability demand empirical analysis of the patterns of thought, main beliefs, and reactions toward the new knowledge and its applications. Anecdotal accounts show that expectations may be misleading and inadequate knowledge prevents appropriate appraisal of burdens and benefits in different societies.
Soares, M.B.; Fatima Bonaldo, M. de
1998-12-08
This invention provides a method to normalize a cDNA library comprising: (a) constructing a directionally cloned library containing cDNA inserts wherein the insert is capable of being amplified by polymerase chain reaction; (b) converting a double-stranded cDNA library into single-stranded DNA circles; (c) generating single-stranded nucleic acid molecules complementary to the single-stranded DNA circles converted in step (b) by polymerase chain reaction with appropriate primers; (d) hybridizing the single-stranded DNA circles converted in step (b) with the complementary single-stranded nucleic acid molecules generated in step (c) to produce partial duplexes to an appropriate Cot; and (e) separating the unhybridized single-stranded DNA circles from the hybridized DNA circles, thereby generating a normalized cDNA library. This invention also provides a method to normalize a cDNA library wherein the generating of single-stranded nucleic acid molecules complementary to the single-stranded DNA circles converted in step (b) is by excising cDNA inserts from the double-stranded cDNA library; purifying the cDNA inserts from cloning vectors; and digesting the cDNA inserts with an exonuclease. This invention further provides a method to construct a subtractive cDNA library following the steps described above. This invention further provides normalized and/or subtractive cDNA libraries generated by the above methods. 25 figs.
Aylward, Frank O.; Khadempour, Lily; Tremmel, Daniel; McDonald, Bradon R.; Nicora, Carrie D.; Wu, Si; Moore, Ronald J.; Orton, Daniel J.; Monroe, Matthew E.; Piehowski, Paul D.; Purvine, Samuel O.; Smith, Richard D.; Lipton, Mary S.; Burnum-Johnson, Kristin E.; Currie, Cameron R.
2015-08-28
Leaf-cutter ants are prolific and conspicuous Neotropical herbivores that derive energy from specialized fungus gardens they cultivate using foliar biomass. The basidiomycetous cultivar of the ants, Leucoagaricus gongylophorus, produces specialized hyphal swellings called gongylidia that serve as the primary food source of ant colonies. Gongylidia also contain lignocellulases that become concentrated in ant digestive tracts and are deposited within fecal droplets onto fresh foliar material as it is foraged by the ants. Although the enzymes concentrated by L. gongylophorus within gongylidia are thought to be critical to the initial degradation of plant biomass, only a few enzymes present in these hyphal swellings have been identified. Here we use proteomic methods to identify proteins present in the gongylidia of three Atta cephalotes colonies. Our results demonstrate that a diverse but consistent set of enzymes is present in gongylidia, including numerous lignocellulases likely involved in the degradation of polysaccharides, plant toxins, and proteins. Overall, gongylidia contained over three-quarters of all lignocellulases identified in the L. gongylophorus genome, demonstrating that the majority of the enzymes produced by this fungus for biomass breakdown are ingested by the ants. We also identify a set of 23 lignocellulases enriched in gongylidia compared to whole fungus garden samples, suggesting that certain enzymes may be particularly important in the initial degradation of foliar material. Our work sheds light on the complex interplay between leaf-cutter ants and their fungal symbiont that allows for the host insects to occupy an herbivorous niche by indirectly deriving energy from plant biomass.
Soares, Marcelo Bento; Bonaldo, Maria de Fatima
1998-01-01
This invention provides a method to normalize a cDNA library comprising: (a) constructing a directionally cloned library containing cDNA inserts wherein the insert is capable of being amplified by polymerase chain reaction; (b) converting a double-stranded cDNA library into single-stranded DNA circles; (c) generating single-stranded nucleic acid molecules complementary to the single-stranded DNA circles converted in step (b) by polymerase chain reaction with appropriate primers; (d) hybridizing the single-stranded DNA circles converted in step (b) with the complementary single-stranded nucleic acid molecules generated in step (c) to produce partial duplexes to an appropriate Cot; and (e) separating the unhybridized single-stranded DNA circles from the hybridized DNA circles, thereby generating a normalized cDNA library. This invention also provides a method to normalize a cDNA library wherein the generating of single-stranded nucleic acid molecules complementary to the single-stranded DNA circles converted in step (b) is by excising cDNA inserts from the double-stranded cDNA library; purifying the cDNA inserts from cloning vectors; and digesting the cDNA inserts with an exonuclease. This invention further provides a method to construct a subtractive cDNA library following the steps described above. This invention further provides normalized and/or subtractive cDNA libraries generated by the above methods.
Aylward, Frank O.; Khadempour, Lily; Tremmel, Daniel M.; McDonald, Bradon R.; Nicora, Carrie D.; Wu, Si; Moore, Ronald J.; Orton, Daniel J.; Monroe, Matthew E.; Piehowski, Paul D.; et al
2015-08-28
Leaf-cutter ants are prolific and conspicuous constituents of Neotropical ecosystems that derive energy from specialized fungus gardens they cultivate using prodigious amounts of foliar biomass. The basidiomycetous cultivar of the ants, Leucoagaricus gongylophorus, produces specialized hyphal swellings called gongylidia that serve as the primary food source of ant colonies. Gongylidia also contain plant biomass-degrading enzymes that become concentrated in ant digestive tracts and are deposited within fecal droplets onto fresh foliar material as ants incorporate it into the fungus garden. Although the enzymes concentrated by L. gongylophorus within gongylidia are thought to be critical to the initial degradation of plant biomass, only a few enzymes present in these hyphal swellings have been identified. Here we use proteomic methods to identify proteins present in the gongylidia of three Atta cephalotes colonies. Our results demonstrate that a diverse but consistent set of enzymes is present in gongylidia, including numerous plant biomass-degrading enzymes likely involved in the degradation of polysaccharides, plant toxins, and proteins. Overall, gongylidia contained over three quarters of all biomass-degrading enzymes identified in the L. gongylophorus genome, demonstrating that the majority of the enzymes produced by this fungus for biomass breakdown are ingested by the ants. We also identify a set of 40 of these enzymes enriched in gongylidia compared to whole fungus garden samples, suggesting that certain enzymes may be particularly important in the initial degradation of foliar material. Our work sheds light on the complex interplay between leaf-cutter ants and their fungal symbiont that allows for the host insects to occupy an herbivorous niche by indirectly deriving energy from plant biomass.
Fritzen, M.R.; Fritzen, T.A.
1994-12-31
Whenever blasting operations are conducted near existing inhabited structures, vibration damage claims are a major concern of the blasting contractor. It has been the authors' experience that even when vibration and airblast levels generated by a blast are well below accepted damage thresholds, damage claims can still arise. The single greatest source of damage claims is the element of surprise associated with not knowing that blasting operations are being conducted nearby. The second greatest source of damage claims arises from the inability to produce accurate and detailed records of all blasting activity showing that vibration and airblast levels from each blast were captured by seismic recording equipment. Using a two-part plan, consisting of extensive public relations followed by detailed and accurate monitoring and recording of blasting operations, has resulted in no substantiated claims of damage since its incorporation. The authors' experience shows that by using this two-part process when conducting blasting operations near inhabited structures, unsubstantiated blast vibration damage claims may be significantly reduced.
Revision of laser-induced damage threshold evaluation from damage probability data
Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas; Melninkaitis, Andrius
2013-04-15
In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of laser-induced damage threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied in Monte Carlo experiments. The reproducibility of the LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. The widely accepted linear fit produced systematic errors when estimating the LIDT and its error bars. To address this, a Bayesian approach is proposed: a novel concept of parametric regression based on a varying kernel and maximum-likelihood fitting is introduced and studied. This approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars with realistic values are obtained as a natural outcome of the parametric fitting. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
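To illustrate the general task, fitting a damage-probability curve to binary 1-on-1 outcomes by maximum likelihood, here is a sketch using a logistic curve fitted by gradient ascent. The logistic form, the optimizer settings, and the synthetic data are all assumptions for illustration; this is not the paper's varying-kernel parametric regression.

```python
import math
import random

def fit_damage_curve(fluences, damaged, steps=2000, lr=0.05):
    """Maximum-likelihood fit of a logistic damage-probability curve
    P(damage | F) = 1 / (1 + exp(-k*(F - f50))) to binary 1-on-1 test
    outcomes, by plain gradient ascent on the log-likelihood.
    (Illustrative stand-in; model form and settings are assumptions.)"""
    k = 1.0
    f50 = sum(fluences) / len(fluences)   # crude initial guess
    for _ in range(steps):
        gk = gf = 0.0
        for F, y in zip(fluences, damaged):
            p = 1.0 / (1.0 + math.exp(-k * (F - f50)))
            gk += (y - p) * (F - f50)     # d logL / dk
            gf += (y - p) * (-k)          # d logL / df50
        k += lr * gk / len(fluences)
        f50 += lr * gf / len(fluences)
    return k, f50

# Synthetic 1-on-1 data: sites damage more often above ~5 (arbitrary units).
random.seed(1)
fl = [i * 0.5 for i in range(20)]
dm = [1 if random.random() < 1.0 / (1.0 + math.exp(-2.0 * (F - 5.0))) else 0
      for F in fl]
k, f50 = fit_damage_curve(fl, dm)
```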
U-096: IBM AIX TCP Large Send Offload Bug Lets Remote Users Deny Service
A remote user can send a series of specially crafted TCP packets to trigger a kernel panic on the target system.
U.S. Department of Energy (DOE) all webpages (Extended Search)
... Pool Design Engineering David Rose Nuclear & Criticality ... Unload Cask Unload Pebbles Digest Carbon Kernels wresidual ... serviced by an overhead crane, with equipment operated ...
TH-A-9A-06: Inverse Planning of Gamma Knife Radiosurgery Using...
Office of Scientific and Technical Information (OSTI)
obtained by solving a constrained integer-linear problem. (4) The shots are placed into ... Subject: 60 APPLIED LIFE SCIENCES; ALGORITHMS; GEOMETRY; KERNELS; NEOPLASMS; OPTIMIZATION; ...
February 2011 Monthly News Roundup | U.S. DOE Office of Science...
... The kernel utilizes a high precision clock synchronization algorithm developed by the Colony team to provide federated nodes with a sufficient global time source for the required ...
PROPERTIES OF SEQUENTIAL CHROMOSPHERIC BRIGHTENINGS AND ASSOCIATED...
Office of Scientific and Technical Information (OSTI)
We report on the physical properties of solar sequential chromospheric brightenings ... (kernels) of solar flares and associated SCBs using high-resolution Halpha images. ...
T-726:Linux-2.6 privilege escalation/denial of service/information leak
Vulnerabilities have been discovered in the Linux kernel that may lead to a privilege escalation, denial of service or information leak.
Federal Finance Facilities Available for Energy Efficiency Upgrades
Office of Environmental Management (EM)
... from corn kernel starch), hemicelluloses, lignin, waste materials, biogas, butanol, diesel-equivalent fuel, sugarcane, and nonfood crops such as poplar trees or switchgrass. ...
Neutronics Studies of Uranium-bearing Fully Ceramic Micro-encapsulated...
Office of Scientific and Technical Information (OSTI)
particle design features (e.g., kernel diameter, coating layer thicknesses, and packing fraction) to understand the impact on reactivity and resulting operating cycle length. ...
Energy Science and Technology Software Center (OSTI)
003089IBMPC00 ACRA-II: Kernel Integration Code System for Estimation of Radiation Doses Caused by a Hypothetical Reactor Accident
Multithreaded Global Address Space Communication Techniques
U.S. Department of Energy (DOE) all webpages (Extended Search)
Multithreaded Global Address Space Communication Techniques for Gyrokinetic Fusion ... of high communication work loads of the underlying kernel among OpenMP threads. ...
Manycore Performance-Portability: Kokkos Multidimensional Array Library
Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; Amsler, Chris; Mish, Sam
2012-01-01
Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space; (2) data-parallel kernels; and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can differ between manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
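The core idea, that the same logical multidimensional array can be mapped to memory in device-specific orders without changing the kernel code, can be illustrated with a NumPy stand-in (this is not the Kokkos API; the "LayoutRight"/"LayoutLeft" naming is borrowed only to label the two orders):

```python
import numpy as np

# Same logical 2-D array, two memory mappings: row-major ("LayoutRight",
# cache-friendly for CPU loops over the last index) and column-major
# ("LayoutLeft", coalesced when GPU threads index the first dimension).
a_right = np.arange(6, dtype=np.float64).reshape(2, 3, order="C")
a_left = np.asfortranarray(a_right)

# The logical view is identical ...
assert (a_right == a_left).all()

# ... but the linear memory order differs, which is exactly what a
# device-specific data access mapping changes at compile time.
flat_right = a_right.ravel(order="K")   # memory order of the C layout
flat_left = a_left.ravel(order="K")     # memory order of the Fortran layout
```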
Search for: All records | SciTech Connect
Office of Scientific and Technical Information (OSTI)
... Carbothermic Synthesis of 820-μm UN Kernels. Investigation of Process Variables Lindemer, Terrence ; Silva, Chinthaka M ; Henry, Jr, John James ; McMurray, Jake W ; Jolly, Brian C ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
of performance between the SP and UCLA's own Appleseed cluster of G4 Macintosh computers will also be presented. Kernel and Application Code Performance for a Spectral...
Energy Science and Technology Software Center (OSTI)
1992-01-17
This software provides a portable benchmark suite for real-time kernels. It tests the performance of many of the system calls, as well as the interrupt response time and task response time to interrupts. These numbers provide a baseline for comparing various real-time kernels and hardware platforms.
Energy Science and Technology Software Center (OSTI)
2015-11-06
SSCA1-K1 is a parallel implementation of kernel 1 of the SSCA1 benchmark suite released by the DARPA HPCS program. This kernel is able to run in parallel on a distributed shared memory system at extreme scales using OpenSHMEM.
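SSCA#1 kernel 1 performs local sequence alignment; a minimal serial Smith-Waterman-style scoring sketch follows. The scoring values are illustrative, and the OpenSHMEM distribution of the dynamic-programming table across processing elements, the point of the package above, is omitted entirely.

```python
def smith_waterman_score(a, b, match=5, mismatch=-3, gap=-4):
    """Best local-alignment score of strings a and b via the
    Smith-Waterman recurrence (scoring parameters are illustrative)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment clamps at zero, so alignments can restart.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

A parallel version distributes anti-diagonals or tile rows of H, which is where a partitioned global address space model such as OpenSHMEM comes in.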
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet; Michalak, Anna M.; van Bloemen Waanders, Bart Gustaaf; McKenna, Sean Andrew
2013-09-01
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms, and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels, and covariance structures derived from easily observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of the Gaussian assumptions inherent in them. We find that the assumption does not appreciably impact the estimates of mean ffCO2 source strengths, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion (e.g., CO). We find that this is possible during the winter months, though the errors can be as large as 50%.
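The sparse-reconstruction idea behind such an inversion can be sketched generically with iterative soft-thresholding (ISTA) on a synthetic underdetermined system. This is an illustration of compressive sensing in general, not the paper's inversion algorithm, and the problem sizes and penalty are arbitrary.

```python
import numpy as np

def ista(A, y, lam=0.05, steps=500):
    """Sparse reconstruction by iterative soft-thresholding (ISTA):
    minimize 0.5*||A x - y||^2 + lam*||x||_1. Generic sketch only."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Synthetic severely underdetermined problem: 40 measurements, 100 unknowns,
# with a 3-sparse true source field.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista(A, y)
```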
THETRIS: A MICRO-SCALE TEMPERATURE AND GAS RELEASE MODEL FOR TRISO FUEL
J. Ortensi; A.M. Ougouag
2011-12-01
The dominating mechanism in the passive safety of gas-cooled, graphite-moderated, high-temperature reactors (HTRs) is the Doppler feedback effect. These reactor designs are fueled with sub-millimeter-sized kernels formed into TRISO particles that are embedded in a graphite matrix. The best spatial and temporal representation of the feedback effect is obtained from an accurate approximation of the fuel temperature. Most accident scenarios in HTRs are characterized by large time constants and slow changes in the fuel and moderator temperature fields. In these situations a meso-scale (pebble- and compact-scale) solution provides a good approximation of the fuel temperature. Micro-scale models are necessary in order to obtain accurate predictions in faster transients or when parameters internal to the TRISO are needed. Since these coated particles constitute one of the fundamental design barriers against the release of fission products, it is important to understand the transient behavior inside this containment system. An explicit TRISO fuel temperature model named THETRIS has been developed and incorporated into the CYNOD-THERMIX-KONVEK suite of coupled codes. The code includes gas release models that provide a simple predictive capability for the internal pressure during transients. The new model yields similar results to those obtained with other micro-scale fuel models, but with the added capability to analyze gas release, internal pressure buildup, and the effects of a gap in the TRISO. The analyses show the instances in which the micro-scale models improve the predictions of the fuel temperature and Doppler feedback. In addition, a sensitivity study of the potential effects on the transient behavior of high-temperature reactors due to the presence of a gap is included. Although the formation of a gap occurs under special conditions, its consequences on the dynamic behavior of the reactor can cause unexpected responses during fast transients. Nevertheless, the strong
Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.
2015-02-01
In support of fully ceramic matrix (FCM) fuel development, coating development work has begun at the Oak Ridge National Laboratory (ORNL) to produce tri-isotropic (TRISO) coated fuel particles with UN kernels. The nitride kernels are used to increase heavy metal density in these SiC-matrix fuel pellets, with details described elsewhere. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (a two-phase mixture of UO2 and UCx) kernels. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels.
Real time gamma-ray signature identifier
Rowland, Mark; Gosnell, Tom B.; Ham, Cheryl; Perkins, Dwight; Wong, James
2012-05-15
A real time gamma-ray signature/source identification method and system using principal components analysis (PCA) for transforming and substantially reducing one or more comprehensive spectral libraries of nuclear materials types and configurations into a corresponding concise representation/signature(s) representing and indexing each individual predetermined spectrum in principal component (PC) space, wherein an unknown gamma-ray signature may be compared against the representative signature to find a match or at least characterize the unknown signature from among all the entries in the library with a single regression or simple projection into the PC space, so as to substantially reduce processing time and computing resources and enable real-time characterization and/or identification.
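The PCA signature-matching scheme described above can be sketched as follows: build principal components from a spectral library, store each spectrum's concise PC-space signature, and identify an unknown with a single projection plus a nearest-signature lookup. The library contents, dimensions, number of components, and noise level are synthetic illustrative assumptions.

```python
import numpy as np

# Synthetic "library": 20 reference spectra with 256 channels each.
rng = np.random.default_rng(2)
library = rng.random((20, 256))
mean = library.mean(axis=0)

# Principal components of the library via SVD of the centered spectra.
U, S, Vt = np.linalg.svd(library - mean, full_matrices=False)
components = Vt[:8]                           # keep 8 PCs as the signature space
signatures = (library - mean) @ components.T  # concise per-spectrum signatures

def identify(spectrum):
    """Project an unknown spectrum into PC space with a single matrix
    product and return the index of the nearest library signature,
    avoiding a full-resolution comparison against every library entry."""
    z = (spectrum - mean) @ components.T
    return int(np.argmin(np.linalg.norm(signatures - z, axis=1)))

# A noisy copy of library entry 7 should still match entry 7.
unknown = library[7] + 0.01 * rng.standard_normal(256)
```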
Quantifying Changes in Building Electricity Use, with Application to Demand Response
Mathieu, Johanna L.; Price, Phillip N.; Kiliccote, Sila; Piette, Mary Ann
2010-11-17
We present methods for analyzing commercial and industrial facility 15-minute-interval electric load data. These methods allow building managers to better understand their facility's electricity consumption over time and to compare it to other buildings, helping them to ask the right questions to discover opportunities for demand response, energy efficiency, electricity waste elimination, and peak load management. We primarily focus on demand response. Methods discussed include graphical representations of electric load data, a regression-based electricity load model that uses a time-of-week indicator variable and a piecewise linear and continuous outdoor air temperature dependence, and the definition of various parameters that characterize facility electricity loads and demand response behavior. In the future, these methods could be translated into easy-to-use tools for building managers.
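The regression model described above, a time-of-week indicator plus a piecewise-linear, continuous temperature term, can be sketched with hinge features and ordinary least squares. The knot temperatures, the synthetic data, and the OLS fit are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def design_matrix(tow_idx, temp, knots=(50.0, 65.0, 80.0), n_tow=672):
    """Features for the load model: one indicator per 15-minute
    time-of-week interval (672 per week) plus a piecewise-linear,
    continuous outdoor-temperature term built from hinge functions
    at assumed knot temperatures."""
    X_tow = np.zeros((len(tow_idx), n_tow))
    X_tow[np.arange(len(tow_idx)), tow_idx] = 1.0
    hinges = [np.maximum(temp - k, 0.0) for k in knots]
    return np.column_stack([X_tow, temp] + hinges)

# Synthetic interval data: baseline load plus a kink in temperature
# response at 65 degrees F (all values made up for illustration).
rng = np.random.default_rng(3)
n = 2000
tow = rng.integers(0, 672, n)
temp = rng.uniform(40.0, 95.0, n)
load = 50.0 + 0.2 * temp + 1.5 * np.maximum(temp - 65.0, 0.0) \
       + rng.normal(0.0, 1.0, n)

X = design_matrix(tow, temp)
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
pred = X @ beta
```

The hinge construction keeps the fitted temperature response continuous while letting its slope change at each knot, which is the "piecewise linear and continuous" property the abstract names.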
Gender Trends in Radiation Oncology in the United States: A 30-Year Analysis
Ahmed, Awad A.; Egleston, Brian; Holliday, Emma; Eastwick, Gary; Takita, Cristiane; Jagsi, Reshma
2014-01-01
Purpose: Although considerable research exists regarding the role of women in the medical profession in the United States, little work has described the participation of women in academic radiation oncology. We examined women's participation in authorship of radiation oncology literature, a visible and influential activity that merits specific attention. Methods and Materials: We examined the gender of first and senior US physician-authors of articles published in the Red Journal in 1980, 1990, 2000, 2004, 2010, and 2012. The significance of trends over time was evaluated using logistic regression. Results were compared with female representation in journals of general medicine and other major medical specialties. Findings were also placed in the context of trends in the representation of women among radiation oncology faculty and residents over the past 3 decades, using Association of American Medical Colleges data. Results: The proportion of women among Red Journal first authors increased from 13.4% in 1980 to 29.7% in 2012, and the proportion among senior authors increased from 3.2% to 22.6%. The proportion of women among radiation oncology full-time faculty increased from 11% to 26.7% from 1980 to 2012. The proportion of women among radiation oncology residents increased from 27.1% to 33.3% from 1980 to 2010. Conclusions: Female first and senior authorship in the Red Journal has increased significantly, as has women's participation among full-time faculty, but women remain underrepresented among radiation oncology residents compared with their representation in the medical student body. Understanding such trends is necessary to develop appropriately targeted interventions to improve gender equity in radiation oncology.
VHTR Prismatic Super Lattice Model for Equilibrium Fuel Cycle Analysis
G. S. Chang
2006-09-01
The advanced Very High Temperature gas-cooled Reactor (VHTR), which is currently being developed, achieves simplification of safety through reliance on innovative features and passive systems. One of the VHTR's innovative features is the reliance on ceramic-coated fuel particles to retain the fission products under extreme accident conditions. The effect of the random fuel kernel distribution in the fuel prismatic block is addressed through the use of the Dancoff correction factor in the resonance treatment. However, if the fuel kernels are not perfect black absorbers, the Dancoff correction factor is a function of burnup and fuel kernel packing factor, which requires that the Dancoff correction factor be updated during Equilibrium Fuel Cycle (EqFC) analysis. An advanced Kernel-by-Kernel (K-b-K) hexagonal super lattice model can be used to address and update the burnup-dependent Dancoff effect during the EqFC analysis. The developed Prismatic Super Homogeneous Lattice Model (PSHLM) is verified by comparing the calculated burnup characteristics of the double-heterogeneous Prismatic Super Kernel-by-Kernel Lattice Model (PSK-b-KLM). This paper summarizes and compares the PSHLM and PSK-b-KLM burnup analysis study and results. This paper also discusses the coupling of a Monte Carlo code with a fuel depletion and buildup code, which provides the fuel burnup analysis tool used to produce the results of the VHTR EqFC burnup analysis.
NLO BFKL and anomalous dimensions of light-ray operators
Balitsky, Ian
2013-05-01
This presentation covers: the Regge limit in coordinate space; the BFKL representation of the 4-point correlation function in N = 4 SYM; light-ray operators; the DGLAP representation of the 4-point correlation function; and anomalous dimensions from the DGLAP vs. BFKL representations.
Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-09-28
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
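The localized straight-line fit near short circuit, viewed as a statistical linear regression, can be sketched with ordinary least squares and its standard parameter-covariance formulas. This frequentist sketch stands in for the paper's objective Bayesian fit (which it notes gives aligned results for simple cases); the synthetic I-V points, slope, and noise level are assumptions.

```python
import numpy as np

def isc_linear_fit(v, i):
    """Straight-line fit i = a + b*v to I-V points near short circuit.
    Isc is the intercept at v = 0; its standard uncertainty comes from
    the usual linear-regression parameter covariance."""
    X = np.column_stack([np.ones_like(v), v])
    beta, *_ = np.linalg.lstsq(X, i, rcond=None)
    resid = i - X @ beta
    s2 = resid @ resid / (len(v) - 2)        # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)        # parameter covariance matrix
    return beta[0], np.sqrt(cov[0, 0])       # (Isc, u(Isc))

# Synthetic I-V points in a small window near short circuit.
rng = np.random.default_rng(4)
v = np.linspace(0.0, 0.1, 15)
i = 8.0 - 0.5 * v + rng.normal(0.0, 0.002, 15)
isc, u_isc = isc_linear_fit(v, i)
```

Widening the data window shrinks this fit uncertainty, which is exactly why it can understate the total uncertainty once model discrepancy (curvature the straight line cannot capture) dominates.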
Aylward, Frank O.; Khadempour, Lily; Tremmel, Daniel M.; McDonald, Bradon R.; Nicora, Carrie D.; Wu, Si; Moore, Ronald J.; Orton, Daniel J.; Monroe, Matthew E.; Piehowski, Paul D.; Purvine, Samuel O.; Smith, Richard D.; Lipton, Mary S.; Burnum-Johnson, Kristin E.; Currie, Cameron R.; Brady, Sean
2015-08-28
Leaf-cutter ants are prolific and conspicuous constituents of Neotropical ecosystems that derive energy from specialized fungus gardens they cultivate using prodigious amounts of foliar biomass. The basidiomycetous cultivar of the ants, Leucoagaricus gongylophorus, produces specialized hyphal swellings called gongylidia that serve as the primary food source of ant colonies. Gongylidia also contain plant biomass-degrading enzymes that become concentrated in ant digestive tracts and are deposited within fecal droplets onto fresh foliar material as ants incorporate it into the fungus garden. Although the enzymes concentrated by L. gongylophorus within gongylidia are thought to be critical to the initial degradation of plant biomass, only a few enzymes present in these hyphal swellings have been identified. Here we use proteomic methods to identify proteins present in the gongylidia of three Atta cephalotes colonies. Our results demonstrate that a diverse but consistent set of enzymes is present in gongylidia, including numerous plant biomass-degrading enzymes likely involved in the degradation of polysaccharides, plant toxins, and proteins. Overall, gongylidia contained over three quarters of all biomass-degrading enzymes identified in the L. gongylophorus genome, demonstrating that the majority of the enzymes produced by this fungus for biomass breakdown are ingested by the ants. We also identify a set of 40 of these enzymes enriched in gongylidia compared to whole fungus garden samples, suggesting that certain enzymes may be particularly important in the initial degradation of foliar material. Our work sheds light on the complex interplay between leaf-cutter ants and their fungal symbiont that allows for the host insects to occupy an herbivorous niche by indirectly deriving energy from plant biomass.
Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.; Ivanov, K. N.
2012-07-01
A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)
Quantitative comparison of noise texture across CT scanners from different manufacturers
Solomon, Justin B.; Christianson, Olav; Samei, Ehsan
2012-10-15
Purpose: To quantitatively compare noise texture across computed tomography (CT) scanners from different manufacturers using the noise power spectrum (NPS). Methods: The American College of Radiology CT accreditation phantom (Gammex 464, Gammex, Inc., Middleton, WI) was imaged on two scanners: Discovery CT 750HD (GE Healthcare, Waukesha, WI), and SOMATOM Definition Flash (Siemens Healthcare, Germany), using a consistent acquisition protocol (120 kVp, 0.625/0.6 mm slice thickness, 250 mAs, and 22 cm field of view). Images were reconstructed using filtered backprojection and a wide selection of reconstruction kernels. For each image set, the 2D NPS were estimated from the uniform section of the phantom. The 2D spectra were normalized by their integral value, radially averaged, and filtered by the human visual response function. A systematic kernel-by-kernel comparison across manufacturers was performed by computing the root mean square difference (RMSD) and the peak frequency difference (PFD) between the NPS from different kernels. GE and Siemens kernels were compared and kernel pairs that minimized the RMSD and |PFD| were identified. Results: The RMSD (|PFD|) values between the NPS of GE and Siemens kernels varied from 0.01 mm² (0.002 mm⁻¹) to 0.29 mm² (0.74 mm⁻¹). The GE kernels 'Soft', 'Standard', 'Chest', and 'Lung' closely matched the Siemens kernels 'B35f', 'B43f', 'B41f', and 'B80f' (RMSD < 0.05 mm², |PFD| < 0.02 mm⁻¹, respectively). The GE 'Bone', 'Bone+', and 'Edge' kernels all matched most closely with the Siemens 'B75f' kernel but with sizeable RMSD and |PFD| values up to 0.18 mm² and 0.41 mm⁻¹, respectively. These sizeable RMSD and |PFD| values corresponded to visually perceivable differences in the noise texture of the images. Conclusions: It is possible to use the NPS to quantitatively compare noise texture across CT systems. The degree to which similar texture across scanners could be achieved varies and is
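Once two radially averaged spectra are on a common frequency axis, the RMSD and PFD metrics reduce to a few lines. The helper below is an illustrative reconstruction, not the authors' code; the discrete normalization convention is an assumption:

```python
import numpy as np

def nps_metrics(f, nps_a, nps_b):
    """Root-mean-square difference and peak-frequency difference between
    two radially averaged NPS curves sampled on the same axis f (mm^-1)."""
    a = nps_a / nps_a.sum()                  # discrete area normalization (assumed)
    b = nps_b / nps_b.sum()
    rmsd = np.sqrt(np.mean((a - b) ** 2))    # RMSD between normalized spectra
    pfd = f[np.argmax(a)] - f[np.argmax(b)]  # peak frequency difference
    return rmsd, pfd

f = np.linspace(0.01, 1.0, 100)              # spatial frequency axis, mm^-1
nps = f * np.exp(-f / 0.2)                   # synthetic band-pass-shaped NPS
rmsd, pfd = nps_metrics(f, nps, nps)         # identical spectra: both metrics vanish
```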
Scalar and tensor spherical harmonics expansion of the velocity...
Office of Scientific and Technical Information (OSTI)
The representation theory of the rotation group is applied to construct a series expansion ... by anisotropic turbulence; representation theory parametrises this dependence by a tensor ...
Microsoft Word - DOE Comment Letter.FINAL.doc
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
6 GEORGETOWN UNIVERSITY LAW CENTER INSTITUTE FOR PUBLIC REPRESENTATION 600 New Jersey ... The Institute for Public Representation ("IPR") is a public interest law firm and clinical ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
RFPs relating to construction projects require additional documentation and are located in the "Construction Specific Forms" tab. Representations and Certifications Representations ...
A Monte-Carlo method for ex-core neutron response
Gamino, R.G.; Ward, J.T.; Hughes, J.C.
1997-10-01
A Monte Carlo neutron transport kernel capability primarily for ex-core neutron response is described. The capability consists of the generation of a set of response kernels, which represent the neutron transport from the core to a specific ex-core volume. This is accomplished by tagging individual neutron histories from their initial source sites and tracking them throughout the problem geometry, tallying those that interact in the geometric regions of interest. These transport kernels can subsequently be combined with any number of core power distributions to determine detector response for a variety of reactor conditions. Thus, the transport kernels are analogous to an integrated adjoint response. Examples of pressure vessel response and ex-core neutron detector response are provided to illustrate the method.
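Folding the tallied kernels with a core power distribution is a single matrix-vector product, which is what makes the kernels analogous to an integrated adjoint. A minimal numerical sketch (the kernel values and powers are made up for illustration):

```python
import numpy as np

# K[d, r]: tallied response of detector d per unit source in core region r.
K = np.array([[0.8, 0.1],
              [0.2, 0.5]])          # hypothetical kernels: 2 detectors x 2 regions
power = np.array([100.0, 50.0])     # hypothetical core power distribution
response = K @ power                # detector responses for this power map
```

Because `K` is computed once, any number of power distributions can be folded in without re-running the transport calculation.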
U.S. Department of Energy (DOE) all webpages (Extended Search)
corrected, cloud-free surface reflectances over a 16-day period with a semi-empirical kernel-driven BRDF model to characterize the surface anisotropy and albedo. Only...
V-226: HP StoreOnce D2D Backup Systems Denial of Service Vulnerability...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
to version 2.3.0 or 1.2.19. Addthis Related Articles U-226: Linux Kernel SFC Driver TCP MSS Option Handling Denial of Service Vulnerability V-062: Asterisk Two Denial of...
U.S. Department of Energy (DOE) all webpages (Extended Search)
miniFE Description: miniFE is a Finite Element mini-application which implements a couple of kernels representative of implicit finite-element applications. It assembles a sparse...
2013 PDSF User Meeting Minutes
U.S. Department of Energy (DOE) all webpages (Extended Search)
and Tony have been doing a rolling upgrade of the kernel and GPFS on the computes. Benchmarking is online. You can find the link here: PDSF Monitoring cvmfs is available via...
Higher order Fokker-Planck operators
Pomraning, G.C.
1996-11-01
If the scattering interaction in linear particle transport problems is highly peaked about zero momentum transfer, a common and often useful approximation is the replacement of the integral scattering operator with the differential Fokker-Planck operator. This operator involves a first derivative in energy and second derivatives in angle. In this paper, higher order Fokker-Planck scattering operators are derived, involving higher derivatives in both energy and angle. The applicability of these higher order differential operators to representative scattering kernels is discussed. It is shown that, depending upon the details of the scattering kernel in the integral operator, higher order Fokker-Planck approximations may or may not be valid. Even the classic low-order Fokker-Planck operator fails as an approximation for certain highly peaked scattering kernels. In particular, no Fokker-Planck operator is a valid approximation for scattering involving the widely used Henyey-Greenstein scattering kernel.
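The Henyey-Greenstein failure can be seen numerically: the kernel's Legendre moments are exactly g^l, decaying only geometrically in l, so no finite-order differential (Fokker-Planck-type) expansion captures the high-order moments. A quadrature check of this known identity (function name is illustrative):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def hg_moment(g, l, n=20001):
    """l-th Legendre moment of the Henyey-Greenstein kernel by trapezoid
    quadrature; analytically the moment equals g**l."""
    mu, h = np.linspace(-1.0, 1.0, n, retstep=True)
    p = 0.5 * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5
    c = np.zeros(l + 1)
    c[l] = 1.0                                   # coefficients selecting P_l
    y = p * legval(mu, c)
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))  # composite trapezoid rule
```

For g = 0.9 the second moment is 0.81, and the l-th moment never falls faster than 0.9^l, which is the slow angular decay that defeats Fokker-Planck truncations.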
Energy Science and Technology Software Center (OSTI)
2012-09-12
The Kokkos Array library implements shared-memory array data structures and parallel task dispatch interfaces for data-parallel computational kernels that are performance-portable to multicore-CPU and manycore-accelerator (e.g., GPGPU) devices.
OpenCL_hands_on_intro_sc14_final_no_zoo_slides.pptx
U.S. Department of Energy (DOE) all webpages (Extended Search)
code cl::Program program(context, KernelSource, true); Example: vector addition * The "hello world" program of data parallel programming is a program to add two vectors Ci ...
Davis, Anthony B.; Xu, Feng; Collins, William D.
2015-03-01
Atmospheric hyperspectral VNIR sensing struggles with sub-pixel variability of clouds and limited spectral resolution mixing molecular lines. Our generalized radiative transfer model addresses both issues with new propagation kernels characterized by power-law decay in space.
OpenCL_hands_on_intro_sc14_final_no_zoo_slides.pptx
U.S. Department of Energy (DOE) all webpages (Extended Search)
... types: image2d_t, image3d_t and sampler_t Vector Types * The OpenCL C kernel programming language provides a set of vector instructions: - These are portable between different ...
Architecture for removable media USB-ARM (Patent) | SciTech Connect
Office of Scientific and Technical Information (OSTI)
Access to the storage device is blocked by a kernel filter driver, except exclusive access is granted to a first anti-virus engine. The first anti-virus engine is directed to scan ...
Using multivariate analyses to compare subsets of electrodes...
Office of Scientific and Technical Information (OSTI)
reduced to 15. Partial least squares regression models relating electrochemical ... Additionally, for each sugar, interval partial least squares regression successfully ...
Brown, D; Danielewicz, P
2002-03-15
This is the manual for a collection of programs that can be used to invert angled-averaged (i.e. one dimensional) two-particle correlation functions. This package consists of several programs that generate kernel matrices (basically the relative wavefunction of the pair, squared), programs that generate test correlation functions from test sources of various types and the program that actually inverts the data using the kernel matrix.
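The inversion step amounts to solving C = K S for the source S given the kernel matrix K (the squared relative wavefunction). Since such problems are ill-posed, some stabilization is needed; the Tikhonov-regularized least squares below is a generic stand-in, not necessarily the package's actual method, and all names are illustrative:

```python
import numpy as np

def invert(K, C, alpha=1e-3):
    """Tikhonov-regularized solution of C = K @ S for the source S."""
    n = K.shape[1]
    A = K.T @ K + alpha * np.eye(n)   # regularized normal equations
    return np.linalg.solve(A, K.T @ C)

K = np.array([[1.0, 0.5],
              [0.5, 1.0]])            # toy 2x2 kernel matrix
S_true = np.array([2.0, 1.0])         # toy source
C = K @ S_true                        # synthetic "measured" correlation
S_est = invert(K, C, alpha=1e-8)      # recovers S_true for this well-posed toy
```

The regularization weight `alpha` trades fidelity against stability; for realistic, noisy kernel matrices it must be chosen with care.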
OpenCL_hands_on_intro_sc14_final_no_zoo_slides.pptx
U.S. Department of Energy (DOE) all webpages (Extended Search)
Context and Command-Queues * Context: - The environment within which kernels execute and in which synchronization and memory management are defined. * The context includes: - One or more devices - Device memory - One or more command-queues * All commands for a device (kernel execution, synchronization, and memory operations) are submitted through a command-queue. * Each command-queue points to a single device within a context.
THMC Modeling of EGS Reservoirs Continuum through Discontinuum
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Representations: Capturing Reservoir Stimulation, Evolution and Induced Seismicity | Department of Energy
Li, Chao; Singh, Vijay P.; Mishra, Ashok K.
2013-02-06
This paper presents an improved bivariate mixed distribution, which is capable of modeling the dependence of daily rainfall from two distinct sources (e.g., rainfall from two stations, two consecutive days, or two instruments such as satellite and rain gauge). The distribution couples an existing framework for building a bivariate mixed distribution, the theory of copulae and a hybrid marginal distribution. Contributions of the improved distribution are twofold. One is the appropriate selection of the bivariate dependence structure from a wider admissible choice (10 candidate copula families). The other is the introduction of a marginal distribution capable of better representing low to moderate values as well as extremes of daily rainfall. Among several applications of the improved distribution, particularly presented here is its utility for single-site daily rainfall simulation. Rather than simulating rainfall occurrences and amounts separately, the developed generator unifies the two processes by generalizing daily rainfall as a Markov process with autocorrelation described by the improved bivariate mixed distribution. The generator is first tested on a sample station in Texas. Results reveal that the simulated and observed sequences are in good agreement with respect to essential characteristics. Then, extensive simulation experiments are carried out to compare the developed generator with three other alternative models: the conventional two-state Markov chain generator, the transition probability matrix model and the semi-parametric Markov chain model with kernel density estimation for rainfall amounts. Analyses establish that overall the developed generator is capable of reproducing characteristics of historical extreme rainfall events and is apt at extrapolating rare values beyond the upper range of available observed data. Moreover, it automatically captures the persistence of rainfall amounts on consecutive wet days in a relatively natural and easy way.
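For contrast, the conventional two-state Markov chain occurrence generator used as a baseline in the comparison can be sketched in a few lines (the transition probabilities are hypothetical, and this is the baseline model, not the paper's copula-based generator):

```python
import numpy as np

def simulate_occurrence(n_days, p_wet_given_dry=0.3, p_wet_given_wet=0.6, seed=0):
    """Simulate a wet (1) / dry (0) daily sequence from a two-state
    Markov chain with the given transition probabilities."""
    rng = np.random.default_rng(seed)
    state = 0                                       # start dry
    out = []
    for _ in range(n_days):
        p = p_wet_given_wet if state else p_wet_given_dry
        state = int(rng.random() < p)               # draw next day's state
        out.append(state)
    return np.array(out)

occ = simulate_occurrence(10000)
wet_frac = occ.mean()   # approaches the stationary wet probability 0.3/(0.3+0.4)
```

A full generator would then draw rainfall amounts for wet days; the paper's contribution is to unify occurrence and amount through the bivariate mixed distribution instead.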
Transient Eddy Current Response Due to a Subsurface Crack in a Conductive Plate
Fangwei Fu
2006-08-09
Eddy current nondestructive evaluation (NDE) is usually carried out by exciting a time harmonic field using an inductive probe. However, a viable alternative is to use transient eddy current NDE in which a current pulse in a driver coil produces a transient field in a conductor that decays at a rate dependent on the conductivity and the permeability of the material and the coil configuration. By using transient eddy current, it is possible to estimate the properties of the conductive medium and to locate and size potential flaws from the measured probe response. The fundamental study described in this dissertation seeks to establish a theoretical understanding of transient eddy current NDE. Compared with the Fourier transform method, the derived analytical formulations are more convenient when the transient eddy current response within a narrow time range is evaluated. The theoretical analysis provides a valuable tool to study the effect of layer thickness, location of defect, crack opening as well as the optimization of probe design. Analytical expressions have been developed to evaluate the transient response due to eddy currents in a conductive plate based on two asymptotic series. One series converges rapidly for a short time regime and the other for a long time regime, and both of them agree with the results calculated by fast Fourier transform over all the times considered. The idea of asymptotic expansion is further applied to determine the induced electromotive force (EMF) in a pick-up coil due to eddy currents in a cylindrical rod. Starting from a frequency domain representation, a quasi-static time domain dyadic Green's function for an electric source in a conductive plate has been derived. The resulting expression has three parts: a free space term, multiple image terms and partial reflection terms. The dyadic Green's function serves as the kernel of an electric field integral equation which defines the interaction of an ideal crack with the transient
Automatic detection of sweep-meshable volumes
Tautges; Timothy J. , White; David R.
2006-05-23
A method of and software for automatically determining whether a mesh can be generated by sweeping for a representation of a geometric solid comprising: classifying surface mesh schemes for surfaces of the representation locally using surface vertex types; grouping mappable and submappable surfaces of the representation into chains; computing volume edge types for the representation; recursively traversing surfaces of the representation and grouping the surfaces into source, target, and linking surface lists; and checking traversal direction when traversing onto linking surfaces.
Kirov, Assen S.; Caravelli, Gregory; Palm, Aasa; Chui, Chen; LoSasso, Thomas
2006-10-15
The higher sensitivity to low-energy scattered photons of radiographic film compared to water can lead to significant dosimetric error when the beam quality varies significantly within a field. Correcting for this artifact will provide greater accuracy for intensity modulated radiation therapy (IMRT) verification dosimetry. A procedure is developed for correction of the film energy-dependent response by creating a pencil beam kernel within our treatment planning system to model the film response specifically. Film kernels are obtained from EGSnrc Monte Carlo simulations of the dose distribution from a 1 mm diameter narrow beam in a model of the film placed at six depths from 1.5 to 40 cm in polystyrene and solid water phantoms. Kernels for different area phantoms (50x50 cm² and 25x25 cm² polystyrene and 30x30 cm² solid water) are produced. The Monte Carlo calculated kernel is experimentally verified with film, ion chamber and thermoluminescent dosimetry (TLD) measurements in polystyrene irradiated by a narrow beam. The kernel is then used in convolution calculations to predict the film response in open and IMRT fields. A 6 MV photon beam and Kodak XV2 film in a polystyrene phantom are selected to test the method as they are often used in practice and can result in large energy-dependent artifacts. The difference in dose distributions calculated with the film kernel and the water kernel is subtracted from film measurements to obtain a practically film-artifact-free IMRT dose distribution for the Kodak XV2 film. For the points with dose exceeding 5 cGy (11% of the peak dose) in a large modulated field and a film measurement inside a large polystyrene phantom at depth of 10 cm, the correction reduces the fraction of pixels for which the film dose deviates from dose to water by more than 5% of the mean film dose from 44% to 6%.
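The correction itself is linear: convolve the same fluence with the film-response kernel and the dose-to-water kernel, then subtract the difference from the film measurement. A 1-D toy sketch with made-up kernel shapes (the extra wide component stands in for the film's over-response to low-energy scatter):

```python
import numpy as np

fluence = np.zeros(101)
fluence[40:61] = 1.0                                 # toy open-field fluence
x = np.linspace(-5.0, 5.0, 101)                      # lateral position, arbitrary units
k_water = np.exp(-x**2 / 0.5)                        # toy dose-to-water pencil kernel
k_film = np.exp(-x**2 / 0.5) + 0.02 * np.exp(-x**2 / 8.0)  # extra low-energy tail
k_water /= k_water.sum()
k_film /= k_film.sum()

d_water = np.convolve(fluence, k_water, mode="same") # predicted dose to water
d_film = np.convolve(fluence, k_film, mode="same")   # predicted film response
film_meas = d_film                                   # pretend measurement
corrected = film_meas - (d_film - d_water)           # artifact-corrected dose
```

In this idealized setting the correction recovers the water dose exactly; real measurements add noise and positioning uncertainty on top.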
Kersaudy, Pierric; Sudret, Bruno; Varsier, Nadège; Picon, Odile; Wiart, Joe
2015-04-01
In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
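The leave-one-out selection step has a cheap closed form for any linear-in-coefficients surrogate. The sketch below illustrates that generic model-selection idea, not the LARS-Kriging-PC algorithm itself; the basis and response are synthetic:

```python
import numpy as np

def loo_error(X, y):
    """Mean squared leave-one-out error of a least-squares fit y ~ X b.

    Uses the closed form e_loo_i = r_i / (1 - h_ii), where h is the
    diagonal of the hat matrix, so no refitting loop is needed.
    """
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    r = y - H @ y                           # ordinary residuals
    e_loo = r / (1.0 - np.diag(H))          # leave-one-out residuals
    return np.mean(e_loo ** 2)

x = np.linspace(-1.0, 1.0, 30)
y = 1.0 + 2.0 * x + 0.5 * x**2                  # noiseless quadratic response
X1 = np.column_stack([x**k for k in range(2)])  # basis up to degree 1
X2 = np.column_stack([x**k for k in range(3)])  # basis up to degree 2
```

With a noiseless quadratic response, the LOO error drops sharply once the basis includes the quadratic term, which is the kind of evidence the selection step keys on.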
Zeng, Wei; Sjöberg, Magnus; Reuss, David L.; Hu, Zongjie
2016-06-01
Implementing spray-guided stratified-charge direct-injection spark-ignited (DISI) engines is inhibited by the occurrence of misfire and partial burns. Engine-performance tests demonstrate that increasing engine speed induces combustion instability, but this deterioration can be prevented by generating swirling flow during the intake stroke. In-cylinder pressure-based heat-release analysis reveals that the appearance of poor-burn cycles is not solely dependent on the variability of early flame-kernel growth. Moreover, cycles can experience burning-rate regression during later combustion stages and may or may not recover before the end of the cycle. Thermodynamic analysis and optical diagnostics are used here to clarify why swirl improves the combustion repeatability from cycle to cycle. The fluid dynamics of swirl/spray interaction was previously demonstrated using high-speed PIV measurements of in-cylinder motored flow. It was found that the sprays of the multi-hole injector redistribute the intake-generated swirl flow momentum, thereby creating a better-centered higher angular-momentum vortex with reduced variability. The engine operation with high swirl was found to have significant improvement in cycle-to-cycle variations of both flow pattern and flow momentum. This paper is an extension of the previous work. Here, PIV measurements and flame imaging are applied to fired operation for studying how the swirl flow affects variability of ignition and subsequent combustion phases. PIV results for fired operation are consistent with the measurements made of motored flow. They demonstrate that the spark-plasma motion is highly correlated with the direction of the gas flow in the vicinity of the spark-plug gap. Without swirl, the plasma is randomly stretched towards either side of the spark plug, causing variability in the ignition of the two spray plumes that are straddling the spark plug. Conversely, swirl flow always convects the spark plasma towards one
Advanced Triso fuels with zirconium carbide for high temperature reactors
Lobach, Sergiy Y.; Knight, Travis W.; Jacob, Norman P.; Athon, Clifton E.
2007-07-01
There are several options for the advanced TRISO fuel: one is primarily the replacement of SiC with ZrC, and the other is a concept involving a thin ZrC layer coating on the kernel, which is then enclosed in usual TRISO coatings. An effort at modeling, fabrication and testing of an advanced TRISO coated UO{sub 2} fuel particle design incorporating an added layer of ZrC over the fuel kernel is under investigation. The objectives of the coated particle development program are to define the essentials of a production route for the manufacture of kernels and coated particles and to identify the important process parameters that determine the particle properties. Still, the integrity of the ZrC coating is important, but not the main goal. The primary purpose of a ZrC coating examination in this study is to determine how it serves as an oxygen getter to limit CO production and hence pressure buildup that would stress coatings leading to failure. This additional ZrC coating also aids in retaining fission products within the kernel, and carbon diffusion in the particle is limited, hence kernel migration rates are slowed. The combined result is that failure rates of coated particles should decrease. (authors)
Estimates of Refrigerator Loads in Public Housing Based on Metered Consumption Data
Miller, JD; Pratt, RG
1998-09-11
The New York Power Authority (NYPA), the New York City Housing Authority (NYCHA), and the U.S. Departments of Housing and Urban Development (HUD) and Energy (DOE) have joined in a project to replace refrigerators in New York City public housing with new, highly energy-efficient models. This project laid the groundwork for the Consortium for Energy Efficiency (CEE) and DOE to enable housing authorities throughout the United States to bulk-purchase energy-efficient appliances. DOE helped develop and plan the program through the ENERGY STAR® Partnerships program conducted by its Pacific Northwest National Laboratory (PNNL). PNNL was subsequently asked to conduct the savings evaluations for 1996 and 1997. PNNL designed the metering protocol and occupant survey, supplied and calibrated the metering equipment, and managed and analyzed the data. The 1996 metering study of refrigerator energy usage in New York City public housing (Pratt and Miller 1997) established the need and justification for a regression-model-based approach to an energy savings estimate. The need originated in logistical difficulties associated with sampling the population and performing a stratified analysis. Commonly, refrigerators with high representation in the population were missed in the sampling schedule, leaving significant holes in the sample and difficulties for the stratified analysis. The justification was found in the fact that strata (distinct groups of identical refrigerators) were not statistically distinct in terms of their label ratio (ratio of metered consumption to label rating). This finding suggested a general regression model could be used to represent the consumption of all refrigerators in the population. In 1996 a simple two-coefficient regression model, a function of only the refrigerator label rating, was developed and used to represent the existing population of refrigerators. A key concept used in the 1997 study grew from findings in a small number of apartments
Bose-Einstein condensation on a manifold with non-negative Ricci curvature
Akant, Levent; Ertuğrul, Emine; Tapramaz, Ferzan; Turgut, O. Teoman
2015-01-15
The Bose-Einstein condensation for an ideal Bose gas and for a dilute weakly interacting Bose gas in a manifold with non-negative Ricci curvature is investigated using the heat kernel and eigenvalue estimates of the Laplace operator. The main focus is on the nonrelativistic gas. However, special relativistic ideal gas is also discussed. The thermodynamic limit of the heat kernel and eigenvalue estimates is taken and the results are used to derive bounds for the depletion coefficient. In the case of a weakly interacting gas, Bogoliubov approximation is employed. The ground state is analyzed using heat kernel methods and finite size effects on the ground state energy are proposed. The justification of the c-number substitution on a manifold is given.
Preparation of UC0.07-0.10N0.90-0.93 spheres for TRISO coated fuel particles
Collins, Jack Lee; Hunt, Rodney Dale; Johnson, Jared A; Silva, Chinthaka M; Lindemer, Terrence
2014-01-01
The U.S. Department of Energy is considering a new nuclear fuel, which should be much more impervious during a loss of coolant accident. The fuel would consist of tristructural isotropic coated particles with dense uranium nitride (UN) kernels. The objectives of this effort are to make uranium oxide microspheres with adequately dispersed carbon nanoparticles and to convert these microspheres into UN kernels. Recent improvements to the internal gelation process were successfully applied to the production of uranium gel spheres with different concentrations of carbon black. After the spheres were washed, a simple, two-step heat profile was used to produce kernels with a chemical composition of UC0.07-0.10N0.90-0.93. The first step involved heating the microspheres to 2023 K in a vacuum, and in the second step, the microspheres were held at 1873 K for 6 hrs in nitrogen.
Nonlinear rescaling principle and entropy-like prox-methods in constrained optimization
Polyak, B.; Teboulle, M.
1994-12-31
The Nonlinear Rescaling Principle (NRP) consists of transforming the objective function and/or constraints into an equivalent problem and using the Classical Lagrangean (CL) of it for both theoretical analysis and numerical methods. The transformation is parametrized by a positive scalar parameter. The methods, based on NRP, consist of sequential unconstrained optimization of the CL for the equivalent problem in primal space and Lagrange multipliers update, using the current minimizer, while the parameter can be fixed or can be changed from step to step. It turns out that such a method is nothing but the prox-method with entropy-like kernel for the dual problem. The entropy-like kernel is defined by the Legendre transformation of the scaling function. We will consider the convergence of both the NRP and the prox-method with entropy-like kernel, and we will also discuss some particular realizations of NRP for Linear and Nonlinear Programming.
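One standard instance of the dual prox step described above, assuming the Kullback-Leibler entropy kernel (other scaling functions yield other entropy-like kernels through the Legendre transform):

```latex
\lambda^{k+1} = \arg\max_{\lambda \ge 0}
  \Bigl\{\, d(\lambda) \;-\; \tfrac{1}{t_k}\, D(\lambda, \lambda^k) \Bigr\},
\qquad
D(\lambda,\mu) = \sum_i \Bigl( \lambda_i \log \tfrac{\lambda_i}{\mu_i}
  \;-\; \lambda_i \;+\; \mu_i \Bigr)
```

where d is the dual function, t_k > 0 the parameter of the k-th step, and D the entropy-like distance replacing the quadratic kernel of the classical proximal point method.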
Exchange-correlation potentials in the adiabatic connection fluctuation-dissipation framework
Niquet, Y. M.; Fuchs, M.; Gonze, X.
2003-09-01
We provide the expression of the exchange-correlation potential in the adiabatic connection fluctuation-dissipation (ACFD) framework, for arbitrary time-dependent (TD) kernels. We investigate the asymptotic behavior of the ACFD potential in three relevant approximations: the random-phase approximation, the exact-exchange kernel in two-electron systems, and the adiabatic local-density approximation. We show that these potentials have the expected -1/r + Q/r³ - α/(2r⁴) tail (in closed-shell systems with spherical symmetry), where Q and α depend on the TD kernel and reflect the physics included in each approximation. We also discuss approximate ACFD potentials that are much simpler to compute than the exact ones while being likely of reasonable accuracy.
Search for: All records | SciTech Connect
Office of Scientific and Technical Information (OSTI)
... Maciejewski, Anthony A (5) Powers, Sarah S (5) Save Results Save this search to My Library ... a representation describing the data pattern, and saving the data and the representation. ...
RETScreen International Clean Energy Project Analysis Tool |...
Open Energy Information (Open El) [EERE & EIA]
URI: cleanenergysolutions.orgcontentretscreen-international-clean-energy- Language: String representation "English,Arabic, ... Urdu,Vietnamese" is too long. Policies:...
Document: NA Actionee: Dorothy Riehie Document Date: 03/09/2011...
U.S. Department of Energy (DOE) all webpages (Extended Search)
... of notes, letters, correspondence, letters of transmittal, facsimile transmittals, emails, messages, checks, graphic representations, films, photographs, videotape, diaries, ...
Video occupant detection and classification
Krumm, John C.
1999-01-01
A system for determining when it is not safe to arm a vehicle airbag by storing representations of known situations as observed by a camera at a passenger seat; and comparing a representation of a camera output of the current situation to the stored representations to determine the known situation most closely represented by the current situation. In the preferred embodiment, the stored representations include the presence or absence of a person or infant seat in the front passenger seat of an automobile.
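The matching step can be sketched as a nearest-neighbor comparison in some feature space (the feature vectors and class labels below are hypothetical; the patent does not specify this representation):

```python
import numpy as np

def classify(current, stored, labels):
    """Return the label of the stored representation closest to the
    current camera-derived representation (Euclidean distance)."""
    d = np.linalg.norm(stored - current, axis=1)  # distance to each known situation
    return labels[int(np.argmin(d))]

stored = np.array([[0.0, 0.0],    # empty seat
                   [1.0, 0.2],    # adult passenger
                   [0.4, 0.9]])   # rear-facing infant seat
labels = ["empty", "adult", "infant_seat"]
result = classify(np.array([0.4, 0.8]), stored, labels)
```

The airbag would then be disarmed for classes such as `infant_seat`, matching the patent's goal of suppressing deployment in unsafe situations.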
Conduit - Scientific Data Exchange Library for HPC Simulations
Energy Science and Technology Software Center (OSTI)
2014-10-22
Conduit is a C++ software library that helps software developers with data representation and data exchange in scientific simulations
Towards the Understanding of Induced Seismicity in Enhanced Geothermal...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Continuum through Discontinuum Representations: Capturing Reservoir Stimulation, Evolution and Induced Seismicity Microearthquake Technology for EGS Fracture Characterization
Alternative Energy Consultants | Open Energy Information
Open Energy Information (Open El) [EERE & EIA]
Name: Alternative Energy Consultants. Place: Texas. Sector: Biofuels, Renewable Energy. Product: Alternative Ene...
Hydrocarbon Technologies | Open Energy Information
Open Energy Information (Open El) [EERE & EIA]
Name: Hydrocarbon Technologies. Place: Lawrenceville, New Jersey. Zip: 08648. Sector: Efficiency. Product: ...
Encap Development LLC | Open Energy Information
Open Energy Information (Open El) [EERE & EIA]
Name: Encap Development LLC. Place: Massachusetts. Zip: 17200. Sector: Efficiency, Renewable Energy, Services, Solar. Product: encap...
The Ashlawn Group LLC | Open Energy Information
Open Energy Information (Open El) [EERE & EIA]
and technical consulting services, sales representations, product development, design and manufacturing process engineering solutions for industrial applications for the Department...
EERE PowerPoint 97-2004 Template: Green Version
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
THMC Modeling of EGS Reservoirs - Continuum through Discontinuum Representations: Capturing Reservoir Stimulation, Evolution and Induced Seismicity Derek Elsworth Pennsylvania ...
California Sunrise Alternative Energy Development LLC | Open...
Open Energy Information (Open El) [EERE & EIA]
Zip: 93505. Sector: Services. Product: California Sunr... g and lighting. References: California Sunrise Alternative Energy...
Improved test method to verify the power rating of a photovoltaic...
Office of Scientific and Technical Information (OSTI)
It presents the results of an evaluation of each method based upon regression analysis of ... MEASURING METHODS; PERFORMANCE; REGRESSION ANALYSIS; STANDARDS; CERTIFICATION Word ...
Collaborative Project. 3D Radiative Transfer Parameterization...
Office of Scientific and Technical Information (OSTI)
out by means of the multiple linear regression analysis associated with topographic ... We derived five regression equations with high statistical correlations for flux ...
Effect of temperature and CO2 concentration on laser-induced...
Office of Scientific and Technical Information (OSTI)
Power law regression was used to fit laboratory Na LIBS calibration data for sodium ... DIOXIDE; CONCENTRATION RATIO; GLASS INDUSTRY; FURNACES; EXHAUST GASES; REGRESSION ANALYSIS
Bounded limit for the Monte Carlo point-flux-estimator
Grimesey, R.A.
1981-01-01
In a Monte Carlo random walk the kernel K(R,E) is used as an expected value estimator at every collision for the collided flux phi/sub c/(r vector,E) at the detector point. A limiting value for the kernel is derived from a diffusion approximation for the probability current at a radius R/sub 1/ from the detector point. The variance of the collided flux at the detector point is thus bounded using this asymptotic form for K(R,E). The bounded point flux estimator is derived. (WHK)
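For context, the variance problem the bounded estimator addresses can be stated with the textbook next-event point-detector kernel (shown here for isotropic scattering; the paper's K(R,E) and its diffusion-derived bound may differ in detail):

```latex
% Next-event point-detector kernel (isotropic scattering):
K(R,E) = \frac{e^{-\Sigma_t(E)\,R}}{4\pi R^{2}}.
% The second moment of the score diverges because collisions can
% occur arbitrarily close to the detector:
\int_0^{R_1} K(R,E)^2\, 4\pi R^{2}\, dR \;\propto\; \int_0^{R_1}\frac{dR}{R^{2}} = \infty .
% The bounded estimator therefore caps K(R,E) for R < R_1 at a
% finite limiting value obtained from a diffusion approximation to
% the probability current at radius R_1.
```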
Executing application function calls in response to an interrupt
Almasi, Gheorghe; Archer, Charles J.; Giampapa, Mark E.; Gooding, Thomas M.; Heidelberger, Philip; Parker, Jeffrey J.
2010-05-11
Executing application function calls in response to an interrupt including creating a thread; receiving an interrupt having an interrupt type; determining whether a value of a semaphore represents that interrupts are disabled; if the value of the semaphore represents that interrupts are not disabled: calling, by the thread, one or more preconfigured functions in dependence upon the interrupt type of the interrupt; yielding the thread; and if the value of the semaphore represents that interrupts are disabled: setting the value of the semaphore to represent to a kernel that interrupts are hard-disabled; and hard-disabling interrupts at the kernel.
U.S. Department of Energy (DOE) all webpages (Extended Search)
MG2 team) Cray Quarterly Meeting, July 22, 2015. NESAP CESM MG2 Update. CESM NESAP MG2 Team Members - NCAR: John Dennis, Chris Kerr, Sean Santos; Cray: Marcus Wagner; Intel: Nadezhda Plotnikova, Martyn Corden; NERSC Liaison: Helen He. MG2 Kernel: MG2 is a kernel for CESM that represents its radiative transfer workload. Typically consumes about 10% of CESM run time. Brought to Dungeon Session in March. Kernel is core bound - not bandwidth l
U.S. Department of Energy (DOE) all webpages (Extended Search)
14 September 14 PDSF Users Meeting 9/14/10 Attending: Eric and Jay from PDSF and users Jeff P., Joanna and Marjorie. Cluster status: Cluster is well utilized, primarily by STAR and ALICE. Discussed ALICE memory requirements of 4GB for now. Outages: Some problems with jobs using up kernel buffers - mainly ALICE - which requires a reboot. The fix has been identified (kernel patch) and is being done. Upcoming downtimes: Nothing scheduled but will do new home and common at some point. New hardware:
U.S. Department of Energy (DOE) all webpages (Extended Search)
QPhiX Case Study, June 20, 2016. Background: QPhiX [1,2,3] is a library optimized for Intel(R) manycore architectures and provides sparse solvers and dslash kernels for Lattice QCD calculations. It supports the Wilson dslash operator with and without clover term as well as Conjugate Gradient [4] and BiCGStab [5] solvers. The main task for QPhiX is to solve the sparse linear system of the Dirac equation, where the kernel is defined by the Wilson dslash operator. Here, U are complex, special
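The Wilson dslash operator mentioned above has a standard form (conventions for the hopping parameter κ and γ-matrix signs vary between codes, so this is indicative rather than QPhiX's exact normalization):

```latex
D_w\,\psi(x) \;=\; \psi(x)
  - \kappa \sum_{\mu=1}^{4} \Big[
      (1-\gamma_\mu)\, U_\mu(x)\, \psi(x+\hat\mu)
    + (1+\gamma_\mu)\, U^{\dagger}_\mu(x-\hat\mu)\, \psi(x-\hat\mu)
  \Big],
```

where the U_mu(x) are the SU(3) gauge links, i.e. the "complex, special unitary" matrices the text refers to.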
U.S. Department of Energy (DOE) all webpages (Extended Search)
Agenda. Location: Berkeley Lab and NERSC OSF (all events are available for remote access over the web). Dates: Feb. 23-26, 2015. Monday, Feb. 23 - New User and Data Analytics Training, NERSC (Berkeley Lab Building 943), 415 20th Street, Oakland, CA. Tuesday, Feb. 24 - Science and Technology Day, Berkeley Lab Building 50 Auditorium. Wednesday, Feb. 25 - Advanced HPC Training: Kernel Tuning Hack-a-thon. Learn how to tune a kernel using your own (up to ~ a few 100s of lines) or one of our
Energy Science and Technology Software Center (OSTI)
2004-05-01
CHOS is a framework that facilitates concurrently supporting multiple Linux distributions on a single system. This is primarily accomplished using the change root (chroot) method built into the Linux kernel. However, CHOS provides an additional kernel module that allows the same chroot file structure to be used for multiple systems. The software also includes utilities to switch into the chroot environment from the command line and a pluggable authentication module (PAM) allowing transparent startup of the chroot environment for users.
U.S. Department of Energy (DOE) all webpages (Extended Search)
Memory Considerations Memory Considerations Overview Carver login nodes each have 48GB of physical memory. Most compute nodes have 24GB; however, 80 compute nodes have 48GB. Not all of this memory is available to user processes. Some memory is reserved for the Linux kernel. Furthermore, since Carver nodes have no disk, the "root" file system (including /tmp) is kept in memory ("ramdisk"). The kernel and root file system combined occupy about 4GB of memory. Therefore users
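The arithmetic above can be sketched in a few lines (the ~4 GB kernel-plus-ramdisk figure is the document's own estimate; actual reservations vary by kernel version and configuration):

```python
# Back-of-envelope memory budget for a diskless node, using the
# figures quoted above: kernel + ramdisk root file system ~= 4 GB.
def usable_memory_gb(physical_gb, kernel_and_ramdisk_gb=4.0):
    """Memory left for user processes on a diskless node."""
    return physical_gb - kernel_and_ramdisk_gb

# Most Carver compute nodes: 24 GB physical -> ~20 GB usable.
print(usable_memory_gb(24.0))   # -> 20.0
# 48 GB nodes: ~44 GB usable.
print(usable_memory_gb(48.0))   # -> 44.0
```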
LDRD final report : autotuning for scalable linear algebra.
Heroux, Michael Allen; Marker, Bryan
2011-09-01
This report summarizes the progress made as part of a one year lab-directed research and development (LDRD) project to fund the research efforts of Bryan Marker at the University of Texas at Austin. The goal of the project was to develop new techniques for automatically tuning the performance of dense linear algebra kernels. These kernels often represent the majority of computational time in an application. The primary outcome from this work is a demonstration of the value of model driven engineering as an approach to accurately predict and study performance trade-offs for dense linear algebra computations.
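The report favors model-driven prediction over brute-force search, but the baseline empirical autotuning loop it improves upon can be sketched as below; the blocked kernel and candidate block sizes are illustrative, not the project's actual code:

```python
import time

def blocked_matmul(A, B, n, bs):
    """Blocked n x n matrix multiply (lists of lists), block size bs."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += a * B[k][j]
    return C

def autotune_block_size(n=64, candidates=(4, 8, 16, 32)):
    """Empirically pick the fastest block size on this machine."""
    A = [[float(i + j) for j in range(n)] for i in range(n)]
    B = [[float(i - j) for j in range(n)] for i in range(n)]
    timings = {}
    for bs in candidates:
        t0 = time.perf_counter()
        blocked_matmul(A, B, n, bs)
        timings[bs] = time.perf_counter() - t0
    return min(timings, key=timings.get)

print("best block size:", autotune_block_size())
```

A model-driven approach replaces the timing loop with a performance model that predicts the winner without running every variant.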
U.S. Department of Energy (DOE) all webpages (Extended Search)
miniFE Description: miniFE is a Finite Element mini-application which implements a couple of kernels representative of implicit finite-element applications. It assembles a sparse linear system from the steady-state conduction equation on a brick-shaped problem domain of linear 8-node hex elements. It then solves the linear system using a simple un-preconditioned conjugate-gradient algorithm. Thus the kernels that it contains are: computation of element-operators (diffusion matrix, source
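The solve stage described above (un-preconditioned conjugate gradient) can be sketched as follows; the tridiagonal test matrix is a toy stand-in for miniFE's assembled conduction system, not its actual data structures:

```python
def cg(A, b, tol=1e-10, max_iter=1000):
    """Un-preconditioned conjugate gradient for a symmetric
    positive-definite A (dense lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 1-D conduction-like tridiagonal SPD system; exact solution [1, 1, 1].
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = cg(A, b)
```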
U.S. Department of Energy (DOE) all webpages (Extended Search)
Implementation and Preliminary Verification of the Quasi-1D Kernel for Intra-pin Resonance Physics in MPACT. Yuxuan Liu and William Martin, University of Michigan. March 16, 2015. CASL-U-2015-0116-000. L3:RTM.SUP.P10.03 Milestone Report. EXECUTIVE SUMMARY: A new resonance self-shielding method ESSM-X
Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems
Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng; Liao, Wei-keng
2015-03-19
This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprised of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives the advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.
Lanczos Image Resampling Benchmark
Energy Science and Technology Software Center (OSTI)
2007-09-30
This software abstracts a simple computational kernel from SWarp, an astrometric image resampling code. The input is a grayscale PGM image file (8-bit or 16-bit integer) and the output is a higher-resolution grayscale image file (8-bit or 16-bit integer, or 32-bit floating point). The user selects a scaling factor to be applied and a convolution kernel type to be used during resampling (using 1, 16, 36, 64 input pixels to generate each output pixel). The resampling is performed using the OpenGL API and can run on a PC with GPU (graphics processing unit) hardware.
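The Lanczos convolution kernel this benchmark selects has a standard closed form, L(x) = sinc(x)·sinc(x/a) for |x| < a; the 16-, 36- and 64-pixel cases quoted above correspond to support a = 2, 3, 4 (a 2a x 2a footprint per output pixel), while the 1-pixel case is presumably nearest-neighbour sampling. A minimal 1-D sketch, not SWarp's implementation:

```python
import math

def lanczos(x, a):
    """Lanczos kernel L(x) = sinc(x) * sinc(x/a) for |x| < a, else 0,
    written as a*sin(pi x)*sin(pi x / a) / (pi x)^2."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_1d(samples, t, a=2):
    """Resample at fractional position t using the 2a nearest samples."""
    i0 = math.floor(t)
    acc = 0.0
    for i in range(i0 - a + 1, i0 + a + 1):
        if 0 <= i < len(samples):
            acc += samples[i] * lanczos(t - i, a)
    return acc
```

In 2-D the kernel is applied separably in x and y, which is where the (2a)^2 = 16, 36 or 64 input pixels per output pixel come from.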
THMC Modeling of EGS Reservoirs Continuum through Discontinuum
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Representations: Capturing Reservoir Stimulation, Evolution and Induced Seismicity | Department of Energy THMC Modeling of EGS Reservoirs Continuum through Discontinuum Representations: Capturing Reservoir Stimulation, Evolution and Induced Seismicity THMC Modeling of EGS Reservoirs Continuum through Discontinuum Representations: Capturing Reservoir Stimulation, Evolution and Induced Seismicity This research will develop a thorough understanding of complex THMC interactions through
Quaas, Johannes; Ming, Yi; Menon, Surabi; Takemura, Toshihiko; Wang, Minghuai; Penner, Joyce E.; Gettelman, Andrew; Lohmann, Ulrike; Bellouin, Nicolas; Boucher, Olivier; Sayer, Andrew M.; Thomas, Gareth E.; McComiskey, Allison; Feingold, Graham; Hoose, Corinna; Kristjansson, Jon Egill; Liu, Xiaohong; Balkanski, Yves; Donner, Leo J.; Ginoux, Paul A.; Stier, Philip; Feichter, Johann; Sednev, Igor; Bauer, Susanne E.; Koch, Dorothy; Grainger, Roy G.; Kirkevag, Alf; Iversen, Trond; Seland, Oyvind; Easter, Richard; Ghan, Steven J.; Rasch, Philip J.; Morrison, Hugh; Lamarque, Jean-Francois; Iacono, Michael J.; Kinne, Stefan; Schulz, Michael
2009-04-10
Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol-cloud-radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds since most GCMs do not include ice nucleation effects, and none of the models explicitly parameterizes aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth (Ta) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found that the model-simulated influence of aerosols on cloud droplet number concentration (Nd) compares relatively well to the satellite data, at least over the ocean. The relationship between Ta and liquid water path is simulated much too strongly by the models. It is shown that this is partly related to the representation of the second aerosol indirect effect in terms of autoconversion. A positive relationship between total cloud fraction (fcld) and Ta as found in the satellite data is simulated by the majority of the models, albeit less strongly than in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong fcld - Ta relationship, our results indicate that none can be identified as a unique explanation. Relationships similar to the ones found in satellite data between Ta and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. The GCMs that simulate a negative OLR - Ta relationship show a strong positive correlation between Ta and fcld. The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of Ta, and parameterisation assumptions such as a lower bound on Nd
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexandre E. (Chappaqua, NY); Gschwind, Michael K. (Chappaqua, NY); Gunnels, John A. (Yorktown Heights, NY)
2012-08-28
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A
2013-10-29
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
A novel concept of QUADRISO particles Part III : applications to the plutonium-thorium fuel cycle.
Talamo, A.
2009-03-01
In the present study, a plutonium-thorium fuel cycle is investigated including the {sup 233}U production and utilization. A prismatic thermal High Temperature Gas Reactor (HTGR) and the novel concept of quadruple isotropic (QUADRISO) coated particles, designed at the Argonne National Laboratory, have been used for the study. In absorbing QUADRISO particles, a burnable poison layer surrounds the central fuel kernel to flatten the reactivity curve as a function of time. At the beginning of life, the fuel in the QUADRISO particles is hidden from neutrons, since they get absorbed in the burnable poison before they reach the fuel kernel. Only when the burnable poison depletes, neutrons start streaming into the fuel kernel inducing fission reactions and compensating the fuel depletion of ordinary TRISO particles. In fertile QUADRISO particles, the absorber layer is replaced by natural thorium with the purpose of flattening the excess of reactivity by the thorium resonances and producing {sup 233}U. The above configuration has been compared with a configuration where fissile (neptunium-plutonium oxide from Light Water Reactors irradiated fuel) and fertile (natural thorium oxide) fuels are homogeneously mixed in the kernel of ordinary TRISO particles. For the {sup 233}U utilization, the core has been equipped with europium oxide absorbing QUADRISO particles.
2015-10-19
The Kokkos library implements thread-parallel execution policies and shared-memory multidimensional array data structures that enable applications and domain libraries to develop computational kernels that are performance portable across multicore-CPU and manycore-accelerator (e.g. GPU) computing architectures.
Energy Science and Technology Software Center (OSTI)
1994-01-01
STRIDE is a suite of benchmarks, 8 in total, which access a computer's low-level caches and memory in a variety of patterns. The resulting performance of the kernels provides significant insight into the performance that a real application may incur and dictates how certain algorithmic choices should be made.
Modeling and Analysis of FCM UN TRISO Fuel Using the PARFUME Code
Blaise Collin
2013-09-01
The PARFUME (PARticle Fuel ModEl) modeling code was used to assess the overall fuel performance of uranium nitride (UN) tri-structural isotropic (TRISO) ceramic fuel in the framework of the design and development of Fully Ceramic Matrix (FCM) fuel. A specific model of a TRISO particle with a UN kernel was developed with PARFUME, and its behavior was assessed under irradiation conditions typical of a Light Water Reactor (LWR). The calculations were used to assess the dimensional changes of the fuel particle layers and kernel, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the pyrolytic carbon (PyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties are unknown at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, more effort is needed to determine them before firm conclusions can be drawn on the applicability of FCM fuel to LWRs.
The Fokker-Planck limit of a family of transport differencing methods
Anistratov, D.Y.
1998-12-31
Recently, Pomraning performed an asymptotic analysis of the Fokker-Planck (FP) limit for the analytic transport equation with a forward-peaked scattering kernel. Then, Adams and Pautz extended this analysis to the discrete ordinates transport equation and studied some difference schemes. In this paper a broad family of transport differencing methods is analyzed.
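For reference, the Fokker-Planck limit replaces the forward-peaked scattering integral with an angular diffusion operator. In the standard azimuthally symmetric slab form (a textbook statement, not quoted from the paper under discussion):

```latex
% Forward-peaked scattering operator -> angular diffusion as the
% kernel becomes increasingly peaked about \mu_0 = 1:
\int_{4\pi}\sigma_s(\Omega'\!\cdot\!\Omega)\,\psi(\Omega')\,d\Omega'
  - \sigma_s\,\psi
  \;\longrightarrow\;
  \frac{\sigma_{tr}}{2}\,
  \frac{\partial}{\partial\mu}\!\left[(1-\mu^{2})
  \frac{\partial\psi}{\partial\mu}\right],
% where \sigma_{tr} = \int \sigma_s(\mu_0)\,(1-\mu_0)\,d\Omega_0
% is the momentum-transfer (transport) cross section.
```

Discrete-ordinates difference schemes are then judged by whether their discrete scattering operator reproduces this limit as the kernel's peaking increases.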
O'Brien, Travis A.; Kashinath, Karthik
2015-05-22
This software implements the fast, self-consistent probability density estimation described by O'Brien et al. (2014, doi: ). It uses a non-uniform fast Fourier transform technique to reduce the computational cost of an objective and self-consistent kernel density estimation method.
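A plain fixed-bandwidth Gaussian KDE shows the baseline computation; the cited method additionally chooses the kernel self-consistently from the data and evaluates it with a non-uniform FFT rather than this direct O(N·M) sum. The data and bandwidth below are illustrative:

```python
import math

def gaussian_kde(data, h, grid):
    """Direct-sum Gaussian kernel density estimate on a grid of
    evaluation points, with fixed bandwidth h."""
    n = len(data)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)
            for x in grid]

data = [0.0, 0.1, -0.2, 1.0, 1.1]          # illustrative sample
grid = [-2.0 + 0.05 * i for i in range(161)]  # covers [-2, 6]
density = gaussian_kde(data, 0.3, grid)
area = sum(density) * 0.05                   # Riemann sum, ~1
```

The FFT trick works because the estimate is a convolution of the binned data with the kernel; a non-uniform FFT avoids the binning step entirely.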
Pyrolytic carbon-coated nuclear fuel
Lindemer, Terrence B.; Long, Jr., Ernest L.; Beatty, Ronald L.
1978-01-01
An improved nuclear fuel kernel having at least one pyrolytic carbon coating and a silicon carbide layer is provided in which extensive interaction of fission product lanthanides with the silicon carbide layer is avoided by providing sufficient UO.sub.2 to maintain the lanthanides as oxides during in-reactor use of said fuel.
U-193: NetBSD System Call Return Value Validation Flaw Lets Local Users Gain Elevated Privileges
On Intel CPUs, the sysret instruction can be manipulated into returning to specific non-canonical addresses, which may yield a CPU reset. We cannot currently rule out the possibility that this vulnerability could also be used to execute code with kernel privilege instead of crashing the system.
T-552: Cisco Nexus 1000V VEM updates address denial of service in VMware ESX/ESXi
The Cisco Nexus 1000V Virtual Ethernet Module (VEM) is a virtual switch for ESX and ESXi. This switch can be added to ESX and ESXi where it replaces the VMware virtual switch and runs as part of the ESX and ESXi kernel. A flaw in the handling of dropped packets by Cisco Nexus 1000V VEM can cause ESX and ESXi to crash.
Semi-classical properties of Berezin–Toeplitz operators with C{sup k}-symbol
Barron, Tatyana; Pinsonnault, Martin; Ma, Xiaonan; Marinescu, George
2014-04-15
We obtain the semi-classical expansion of the kernels and traces of Toeplitz operators with C{sup k}-symbol on a symplectic manifold. We also give a semi-classical estimate of the distance of a Toeplitz operator to the space of self-adjoint and multiplication operators.
Mutant maize variety containing the glt1-1 allele
Nelson, Oliver E.; Pan, David
1994-01-01
A maize plant has in its genome a non-mutable form of a mutant allele designated vitX-8132. The allele is located at a locus designated as glt which conditions kernels having an altered starch characteristic. Maize plants including such a mutant allele produce a starch that does not increase in viscosity on cooling, after heating.
Mutant maize variety containing the glt1-1 allele
Nelson, O.E.; Pan, D.
1994-07-19
A maize plant has in its genome a non-mutable form of a mutant allele designated vitX-8132. The allele is located at a locus designated as glt which conditions kernels having an altered starch characteristic. Maize plants including such a mutant allele produce a starch that does not increase in viscosity on cooling, after heating. 2 figs.
X-ray fluorescence microtomography of SiC shells
Ice, G.E.; Chung, J.S.; Nagedolfeizi, M.
1997-04-01
TRISO coated fuel particles contain a small kernel of nuclear fuel encapsulated by alternating layers of C and SiC. The TRISO coated fuel particle is used in an advanced fuel designed for passive containment of the radioactive isotopes. The SiC layer provides the primary barrier for radioactive elements in the kernel. The effectiveness of this barrier layer under adverse conditions is critical to containment. The authors have begun the study of SiC shells from TRISO fuel. They are using the fluorescence microprobe beamline 10.3.1. The shells under evaluation include some which have been cycled through a simulated core melt-down. The C buffer layers and nuclear kernels of the coated fuel have been removed by laser drilling through the SiC and then exposing the particle to acid. Elements of interest include Ru, Sb, Cs, Ce and Eu. The radial distribution of these elements in the SiC shells can be attributed to diffusion of elements in the kernel during the melt-down. Other elements in the shells originate during the fabrication of the TRISO particles.
Modeling and Analysis of UN TRISO Fuel for LWR Application Using the PARFUME Code
Blaise Collin
2014-08-01
The Idaho National Laboratory (INL) PARFUME (particle fuel model) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn up. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.
Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications
Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan
2012-05-15
Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features, are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2 over the version that used default optimizations, but no
Hoyle, Christopher R.; Webster, Clare S.; Rieder, Harald E.; Nenes, Athanasios; Hammer, Emanuel; Herrmann, Erik; Gysel, Martin; Bukowiecki, Nicolas; Weingartner, Ernest; Steinbacher, Martin; et al
2016-03-29
In this study, a simple statistical model to predict the number of aerosols which activate to form cloud droplets in warm clouds has been established, based on regression analysis of data from four summertime Cloud and Aerosol Characterisation Experiments (CLACE) at the high-altitude site Jungfraujoch (JFJ). It is shown that 79% of the observed variance in droplet numbers can be represented by a model accounting only for the number of potential cloud condensation nuclei (defined as the number of particles larger than 80 nm in diameter), while the mean errors in the model representation may be reduced by the addition of further explanatory variables, such as the mixing ratios of O3, CO, and the height of the measurements above cloud base. The statistical model has a similar ability to represent the observed droplet numbers in each of the individual years, as well as for the two predominant local wind directions at the JFJ (northwest and southeast). Given the central European location of the JFJ, with air masses in summer being representative of the free troposphere with regular boundary layer in-mixing via convection, we expect that this statistical model is generally applicable to warm clouds under conditions where droplet formation is aerosol limited (i.e. at relatively high updraught velocities and/or relatively low aerosol number concentrations). Finally, a comparison between the statistical model and an established microphysical parametrization shows good agreement between the two and supports the conclusion that cloud droplet formation at the JFJ is predominantly controlled by the number concentration of aerosol particles.
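The regression-model approach can be illustrated with a one-predictor ordinary least squares fit and its explained-variance measure R²; the numbers below are hypothetical, not CLACE data:

```python
def linear_fit(x, y):
    """Ordinary least squares y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical: droplet number vs. number of particles > 80 nm diameter.
n80 = [50.0, 120.0, 200.0, 310.0, 450.0, 600.0]
nd  = [40.0,  90.0, 150.0, 220.0, 300.0, 380.0]
a, b, r2 = linear_fit(n80, nd)
```

Adding further explanatory variables (O3, CO, height above cloud base) turns this into a multiple regression, reducing the residual errors as the abstract describes.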
Roy Chowdhury, Taniya; Herndon, Elizabeth M.; Phelps, Tommy J.; Elias, Dwayne A.; Gu, Baohua; Liang, Liyuan; Wullschleger, Stan D.; Graham, David E.
2014-11-26
Arctic permafrost ecosystems store ~50% of global belowground carbon (C) that is vulnerable to increased microbial degradation with warmer active layer temperatures and thawing of the near-surface permafrost. We used anoxic laboratory incubations to estimate anaerobic CO2 production and methanogenesis in active layer (organic and mineral soil horizons) and permafrost samples from center, ridge and trough positions of a water-saturated low-centered polygon in the Barrow Environmental Observatory, Barrow, AK, USA. Methane (CH4) and CO2 production rates and concentrations were determined at -2, +4, or +8 C for a 60-day incubation period. Temporal dynamics of CO2 production and methanogenesis at -2 C showed evidence of fundamentally different mechanisms of substrate limitation and inhibited microbial growth at soil water freezing points compared to warmer temperatures. Nonlinear regression better modeled the initial rates and estimates of Q10 values for CO2, which showed higher sensitivity in the organic-rich soils of the polygon center and trough than in the relatively drier ridge soils. Methanogenesis generally exhibited a lag phase in the mineral soils that was significantly longer at -2 C in all horizons. Such discontinuity in CH4 production between -2 C and the elevated temperatures (+4 and +8 C) indicated the insufficient representation of methanogenesis on the basis of Q10 values estimated from both linear and nonlinear models. Production rates for both CH4 and CO2 were substantially higher in organic horizons (20% to 40% wt. C) at all temperatures relative to mineral horizons (<20% wt. C). The permafrost horizon (~12% wt. C) produced ~5-fold less CO2 than the active layer and negligible CH4. High concentrations of initial exchangeable Fe(II) and increasing accumulation rates signified the role of iron as a terminal electron acceptor for anaerobic C degradation in the mineral horizons.
Roy Chowdhury, Taniya; Herndon, Elizabeth M; Phelps, Tommy Joe; Elias, Dwayne A; Gu, Baohua; Liang, Liyuan; Wullschleger, Stan D; Graham, David E
2015-01-01
Arctic permafrost ecosystems store ~50% of global belowground carbon (C) that is vulnerable to increased microbial degradation with warmer active layer temperatures and thawing of the near surface permafrost. We used anoxic laboratory incubations to estimate anaerobic CO2 production and methanogenesis in active layer (organic and mineral soil horizons) and permafrost samples from center, ridge and trough positions of a water-saturated low-centered polygon at the Barrow Environmental Observatory, Barrow AK, USA. Methane (CH4) and CO2 production rates and concentrations were determined at -2, +4, or +8 °C for a 60-day incubation period. Temporal dynamics of CO2 production and methanogenesis at -2 °C showed evidence of fundamentally different mechanisms of substrate limitation and inhibited microbial growth at soil water freezing points compared to warmer temperatures. Nonlinear regression better modeled the initial rates and estimates of Q10 values for CO2, which showed higher sensitivity in the organic-rich soils of the polygon center and trough than in the relatively drier ridge soils. Methanogenesis generally exhibited a lag phase in the mineral soils that was significantly longer at -2 °C in all horizons. Such discontinuity in CH4 production between -2 °C and the elevated temperatures (+4 and +8 °C) indicated that methanogenesis is insufficiently represented by Q10 values estimated from either linear or nonlinear models. Production rates for both CH4 and CO2 were substantially higher in organic horizons (20% to 40% wt. C) at all temperatures relative to mineral horizons (<20% wt. C). The permafrost horizon (~12% wt. C) produced ~5-fold less CO2 than the active layer and negligible CH4. High concentrations of initial exchangeable Fe(II) and increasing accumulation rates signified the role of iron as a terminal electron acceptor for anaerobic C degradation in the mineral horizons.
Predicting fine-scale distributions of peripheral aquatic species in headwater streams
DeRolph, Christopher R.; Nelson, Stacy A. C.; Kwak, Thomas J.; Hain, Ernie F.
2014-12-09
Headwater species and peripheral populations that occupy habitat at the edge of a species range may hold an increased conservation value to managers due to their potential to maximize intraspecies diversity and species' adaptive capabilities in the context of rapid environmental change. The southern Appalachian Mountains are the southern extent of the geographic range of native Salvelinus fontinalis and naturalized Oncorhynchus mykiss and Salmo trutta in eastern North America. In this paper, we predicted distributions of these peripheral, headwater wild trout populations at a fine scale to serve as a planning and management tool for resource managers to maximize resistance and resilience of these populations in the face of anthropogenic stressors. We developed correlative logistic regression models to predict occurrence of brook trout, rainbow trout, and brown trout for every interconfluence stream reach in the study area. A stream network was generated to capture a more consistent representation of headwater streams. Each of the final models had four significant metrics in common: stream order, fragmentation, precipitation, and land cover. Strahler stream order was the most influential variable in two of the three final models and the second most influential in the other. Greater than 70% presence accuracy was achieved for all three models. The underrepresentation of headwater streams in commonly used hydrography datasets is an important consideration that warrants close examination when forecasting headwater species distributions and range estimates. Finally, it appears that a relative watershed position metric (e.g., stream order) is an important surrogate variable (even when elevation is included) for biotic interactions across the landscape in areas where headwater species distributions are influenced by topographical gradients.
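A correlative logistic regression occurrence model of the kind described maps reach-level covariates to a presence probability through the logistic link. A minimal sketch, where the coefficient values and covariate names are hypothetical placeholders, not the fitted model from the paper:

```python
import math

# Hypothetical coefficients for a per-reach trout occurrence model.
# The four covariate classes match those named in the abstract.
COEF = {
    "intercept": -2.0,
    "stream_order": 0.9,    # Strahler stream order
    "fragmentation": -0.6,  # e.g., count of downstream barriers
    "precipitation": 0.4,   # standardized mean annual precipitation
    "forest_cover": 0.5,    # standardized proportion of forested land cover
}

def presence_probability(reach):
    """Logistic response: p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = COEF["intercept"] + sum(COEF[k] * x for k, x in reach.items())
    return 1.0 / (1.0 + math.exp(-z))

reach = {"stream_order": 2, "fragmentation": 1,
         "precipitation": 0.3, "forest_cover": 1.2}
print(round(presence_probability(reach), 3))
```

Fitting such a model to presence/absence records (e.g., by maximum likelihood) yields the coefficients; the sketch only shows the prediction step applied to every interconfluence reach.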
An Experiment on Graph Analysis Methodologies for Scenarios
Brothers, Alan J.; Whitney, Paul D.; Wolf, Katherine E.; Kuchar, Olga A.; Chin, George
2005-09-30
Visual graph representations are increasingly used to represent, display, and explore scenarios and the structure of organizations. The graph representations of scenarios are readily understood, and commercial software is available to create and manage these representations. The purpose of the research presented in this paper is to explore whether these graph representations support quantitative assessments of the underlying scenarios. The experiment targeted the underlying structure of the scenarios and the extent to which the scenarios are similar in content. An experiment was designed that incorporated both the contents of the scenarios and analysts' graph representations of them. Analysts represented the scenarios' content graphically, and both the structure and the semantics of the graph representations were used in an attempt to recover that content. The structural information did not discriminate among the scenarios' content in this experiment, but the semantic information did.
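One simple structural comparison for two scenario graphs, shown here only as an illustrative stand-in for the (unspecified) structural measures used in the experiment, is Jaccard similarity of their edge sets. The node labels below are hypothetical:

```python
def edge_jaccard(edges_a, edges_b):
    """|intersection| / |union| of two edge sets (0.0 when both are empty)."""
    a, b = set(edges_a), set(edges_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Two hypothetical scenario graphs as sets of directed edges:
scenario_a = {("courier", "meeting"), ("meeting", "transfer"), ("transfer", "site")}
scenario_b = {("courier", "meeting"), ("meeting", "transfer"), ("transfer", "vehicle")}
print(edge_jaccard(scenario_a, scenario_b))  # 2 shared edges of 4 total -> 0.5
```

A purely structural score like this ignores node and edge labels, which is consistent with the experiment's finding that structure alone did not discriminate while semantics did.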
Resonant thermonuclear reaction rate
Haubold, H.J.; Mathai, A.M.
1986-08-01
Basic physical principles for the resonant and nonresonant thermonuclear reaction rates are applied to find their standard representations for nuclear astrophysics. Closed-form representations for the resonant reaction rate are derived in terms of Meijer's G-function. Analytic representations of the resonant and nonresonant nuclear reaction rates are compared and the appearance of Meijer's G-function is discussed in physical terms.
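For context, the standard textbook representations the abstract starts from are the narrow-resonance rate and the nonresonant (Gamow-peak) integral. A sketch, with μ the reduced mass, ωγ the resonance strength, and S(E₀) the astrophysical S-factor:

```latex
% Narrow-resonance thermonuclear reaction rate:
\langle \sigma v \rangle_{\mathrm{res}}
  = \left(\frac{2\pi}{\mu k T}\right)^{3/2} \hbar^{2}\,(\omega\gamma)\,
    \exp\!\left(-\frac{E_{r}}{kT}\right)

% Nonresonant rate, with the Coulomb-barrier penetration factor b/\sqrt{E}:
\langle \sigma v \rangle_{\mathrm{nr}}
  \simeq \left(\frac{8}{\pi\mu}\right)^{1/2} \frac{S(E_{0})}{(kT)^{3/2}}
    \int_{0}^{\infty} \exp\!\left(-\frac{E}{kT}-\frac{b}{\sqrt{E}}\right) dE
```

The closed forms in terms of Meijer's G-function arise from evaluating integrals of this exponential type exactly rather than by the usual saddle-point approximation.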
1Q CY2001 (PDF), Facility Representative Program Performance Indicators Quarterly Report
The Facility Representative Program Performance Indicators (PIs) Quarterly Report is attached covering the period from January to March 2001. Data for these indicators are gathered by the Field...
MANUSCRIPT PREPARATION TEMPLATE FOR THE 35TH IEEE PHOTOVOLTAIC...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Transmission and distribution simulation platforms commonly used by system planners do not presently have full-featured models for PV systems representation. Interest in ...
Nextreme Thermal Solutions Inc | Open Energy Information
Open Energy Information (OpenEI) [EERE & EIA]
Nextreme Thermal Solutions Inc Jump to: navigation, search Name: Nextreme Thermal Solutions Inc Place: North Carolina Zip: 27709-3981 Product: String representation "Manufactures...
Southside Thermal Services Ltd | Open Energy Information
Open Energy Information (OpenEI) [EERE & EIA]
Services Ltd Jump to: navigation, search Name: Southside Thermal Services Ltd Place: London, Greater London, United Kingdom Zip: SW7 2AZ Product: String representation "Southside...
Malibu Joint Venture | Open Energy Information
Open Energy Information (OpenEI) [EERE & EIA]
Malibu Joint Venture Jump to: navigation, search Name: Malibu Joint Venture Place: Germany Sector: Solar Product: String representation "German utility ... e of next year." is too...
emergency response assets | National Nuclear Security Administration
National Nuclear Security Administration (NNSA)
The FRMAC is an interagency organization with representation... Radiation Emergency Assistance Center Training Site NNSA's Radiation Emergency Assistance Center Training Site ...
Davis Graham Stubbs LLP | Open Energy Information
Open Energy Information (OpenEI) [EERE & EIA]
Graham Stubbs LLP Jump to: navigation, search Name: Davis Graham & Stubbs LLP Place: Denver, Colorado Zip: 80202 Sector: Services Product: String representation "Davis Graham & ......
Auriga Energy | Open Energy Information
Open Energy Information (OpenEI) [EERE & EIA]
search Name: Auriga Energy Place: Bristol, United Kingdom Zip: BS1 5UB Sector: Solar, Vehicles Product: String representation "Auriga Energy i ... of the market." is too...
Extended Formulations in Mixed-integer Convex Programming | Argonne...
U.S. Department of Energy (DOE) all webpages (Extended Search)
reformulations are shown to be effective extended formulations themselves because they encode separability structure. For mixed-integer conic-representable problems, we provide the...
Evince Technology | Open Energy Information
Open Energy Information (OpenEI) [EERE & EIA]
Evince Technology Jump to: navigation, search Name: Evince Technology Place: United Kingdom Sector: Efficiency, Wind energy Product: String representation "Evince has pion ... ing...
Intec Power Holdings Ltd | Open Energy Information
Open Energy Information (OpenEI) [EERE & EIA]
NG6 0GA Sector: Buildings Product: String representation "Intec's "Silent ... control system." is too long. References: Intec Power Holdings Ltd1 This article is a stub. You...
University of Delaware Institute of Energy Conversion | Open...
Open Energy Information (OpenEI) [EERE & EIA]
Institute of Energy Conversion Jump to: navigation, search Name: University of Delaware Institute of Energy Conversion Place: Delaware Product: String representation "University...
Search for: All records | SciTech Connect
Office of Scientific and Technical Information (OSTI)
... the nucleon helicity-flip and nonflip quark GPDs in K* Lambda production with ... Pion cloud and sea quark flavor asymmetry in the impact parameter representation Strikman, ...
Future Energy Assets LP | Open Energy Information
Open Energy Information (OpenEI) [EERE & EIA]
Assets LP Jump to: navigation, search Name: Future Energy Assets LP Place: Austin, Texas Zip: 78701 Product: String representation "Future Energy A ... S and in China." is too...
Indiana Office of Energy Defense Development | Open Energy Information
Open Energy Information (OpenEI) [EERE & EIA]
Energy Defense Development Jump to: navigation, search Name: Indiana Office of Energy & Defense Development Place: Indianapolis, Indiana Zip: 46204 Product: String representation...
Los Angeles Mayors Office | Open Energy Information
Open Energy Information (OpenEI) [EERE & EIA]
Mayors Office Jump to: navigation, search Name: Los Angeles Mayors Office Place: Los Angeles, California Zip: 90012-3239 Product: String representation "The Clean Tech ... LEED...