Adaptive wiener image restoration kernel
Yuan, Ding
2007-06-05
A method and device for restoring electro-optical image data with an adaptive Wiener filter begins by constructing the imaging system's optical transfer function (OTF) and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is then recovered by spatially convolving the image with a Wiener restoration kernel.
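The frequency-domain form of this restoration is the textbook Wiener deconvolution, F̂ = G·H* / (|H|² + NSR), where H is the OTF, G the observed image spectrum, and NSR a noise-to-signal ratio. A minimal numpy sketch in that spirit (the synthetic scene, Gaussian PSF, and noise level are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def wiener_restore(image, psf, nsr):
    # Frequency-domain Wiener deconvolution:
    #   F_hat = G * conj(H) / (|H|^2 + NSR)
    H = np.fft.fft2(psf, s=image.shape)   # system transfer function (OTF)
    G = np.fft.fft2(image)                # observed image spectrum
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))

# illustrative demo: blur a synthetic scene, add noise, restore
rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0                 # bright square on dark background
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()                          # normalized Gaussian blur kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, s=scene.shape)))
noisy = blurred + 0.01 * rng.standard_normal(scene.shape)
restored = wiener_restore(noisy, psf, nsr=1e-2)
```

The patent performs the equivalent restoration by spatial convolution with a Wiener kernel; the frequency-domain division above is the standard counterpart of that operation.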
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Catamount N-Way Lightweight Kernel, R&D 100 Entry. Submitting organization: Sandia National Laboratories, PO Box 5800, Albuquerque, NM 87185-1319 USA. Contact: Ron Brightwell, phone (505) 844-2099, fax (505) 845-7442, rbbrigh@sandia.gov. AFFIRMATION: I affirm that all information submitted as a part of, or supplemental to, this entry is a fair and accurate representation of this product. Ron Brightwell. Joint entry.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Math Kernel Library (MKL). Description: The Intel Math Kernel Library (Intel MKL) contains highly optimized, extensively threaded math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, vector math, and more. MKL is available on NERSC computer platforms where the Intel compilers are available, for instance on Cori and Edison. If you use Intel compilers to compile your
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.
Robotic Intelligence Kernel: Communications
Energy Science and Technology Software Center (OSTI)
2009-09-16
The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.
Robotic Intelligence Kernel: Driver
Energy Science and Technology Software Center (OSTI)
2009-09-16
The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.
Robotic Intelligence Kernel: Architecture
Energy Science and Technology Software Center (OSTI)
2009-09-16
The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.
Robotic Intelligence Kernel: Visualization
Energy Science and Technology Software Center (OSTI)
2009-09-16
The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.
Bruemmer, David J.
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative, and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
Linux Kernel Error Detection and Correction
Energy Science and Technology Software Center (OSTI)
2007-04-11
EDAC-utils consists of a library and a set of utilities for retrieving statistics from the Linux Kernel Error Detection and Correction (EDAC) drivers.
V-098: Linux Kernel Extended Verification Module Bug Lets Local...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
reported in the Linux Kernel. REFERENCE LINKS: The Linux Kernel Archives Linux Kernel Red Hat Bugzilla - Bug 913266 SecurityTracker Alert ID: 1028196 CVE-2013-0313 IMPACT...
Tharrington, Arnold N.
2015-09-09
The NCCS Regression Test Harness is a software package that provides a framework for performing regression and acceptance testing on NCCS high-performance computers. The package is written in Python, and its only dependency is a Subversion repository to store the regression tests.
Time Adaptive Conditional Kernel Density Estimation for Wind...
Office of Scientific and Technical Information (OSTI)
Citation Details: Time Adaptive Conditional Kernel Density Estimation for Wind Power Forecasting.
Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications...
Office of Scientific and Technical Information (OSTI)
Citation Details: Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications.
KITTEN Lightweight Kernel 0.1 Beta
Energy Science and Technology Software Center (OSTI)
2007-12-12
The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS, and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.
TICK: Transparent Incremental Checkpointing at Kernel Level
Energy Science and Technology Software Center (OSTI)
2004-10-25
TICK is a software package implemented in Linux 2.6 that allows the save and restore of user processes, without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can be later thawed in another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in the Linux version 2.6.5
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen-Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
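For context, the baseline that the paper's subquadratic algorithm improves on can be sketched directly from the model definition: each vertex gets a type, and each pair is connected independently with a kernel-determined probability. A naive O(n²) sampler (the uniform vertex types and the edge probability min(1, κ(x_i, x_j)/n) follow the usual inhomogeneous-random-graph convention; the paper's own fast algorithm is not reproduced here):

```python
import numpy as np

def naive_kernel_graph(n, kappa, seed=0):
    # Reference O(n^2) sampler for an inhomogeneous random (kernel) graph:
    # vertex i gets a type x_i ~ Uniform(0,1); edge {i,j} is present
    # independently with probability min(1, kappa(x_i, x_j) / n).
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, kappa(x[i], x[j]) / n):
                edges.append((i, j))
    return x, edges

# constant kernel kappa = 2 gives an Erdos-Renyi-like graph with mean degree ~2
x, edges = naive_kernel_graph(500, lambda u, v: 2.0)
```

This quadratic loop is exactly what becomes infeasible for large n, which motivates the paper's O(n(log n)²) sampling scheme.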
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber
2010-10-01
Babcock and Wilcox (B&W) has been producing high-quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels, which are being used in coating development tests. Successive lots of kernels have demonstrated consistently high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Additional modifications have been studied toward the goal of increasing the capacity of the current fabrication line for production of first-core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full-scale fuel fabrication facility.
Wilson Dslash Kernel From Lattice QCD Optimization
Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high-energy physics. LQCD is traditionally one of the first applications ported to new high-performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g., 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work on optimizing the Wilson-Dslash kernel for the Intel Xeon Phi; however, as we will show, the techniques give excellent performance on regular Xeon architectures as well.
PERI Auto-tuning Memory Intensive Kernels
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
PERI - Auto-tuning Memory Intensive Kernels for Multicore Samuel Williams † , Kaushik Datta † , Jonathan Carter , Leonid Oliker † , John Shalf , Katherine Yelick † , David Bailey CRD/NERSC, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA † Computer Science Division, University of California at Berkeley, Berkeley, CA 94720, USA E-mail: SWWilliams@lbl.gov, kdatta@eecs.berkeley.edu, JTCarter@lbl.gov, LOliker@lbl.gov, JShalf@lbl.gov, KAYelick@lbl.gov, DHBailey@lbl.gov
PySKI: THE PYTHON SPARSE KERNEL INTERFACE
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
This content is based on slides produced by Tom Deakin and Simon, which were based on slides by Tim and Simon, with help from Ben Gaster (Qualcomm). Agenda (lectures and exercises): An Introduction to OpenCL; Logging in and running the Vadd program; Understanding host programs; Chaining Vadd kernels together; Kernel programs; The D = A + B + C problem; Writing kernel programs; Matrix multiplication; Lunch; Working with the OpenCL memory model; Several ways to optimize matrix multiplication; High Performance OpenCL
V-169: Linux Kernel "iscsi_add_notunderstood_response()" Buffer...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
has been reported in Linux Kernel. REFERENCE LINKS: Secunia Advisory SA53670 Red Hat Bugzilla - Bug 968036 CVE-2013-2850 IMPACT ASSESSMENT: Medium DISCUSSION: The...
Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism ...
Office of Scientific and Technical Information (OSTI)
Sponsoring Org: SC USDOE - Office of Science (SC) Country of Publication: United States Language: English Subject: operating system noise; operating system interference; kernel ...
U-175: Linux Kernel KVM Memory Slot Management Flaw
Broader source: Energy.gov [DOE]
A vulnerability was reported in the Linux Kernel. A local user on the guest operating system can cause denial of service conditions on the host operating system.
U-086: Linux Kernel "/proc//mem" Privilege Escalation Vulnerability
Broader source: Energy.gov [DOE]
A vulnerability has been discovered in the Linux Kernel, which can be exploited by malicious, local users to gain escalated privileges.
Transportation Representation | NISAC
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
NISAC Transportation Representation. Chemical Supply Chain Analysis, posted Mar 1, 2012. NISAC has...
U-242: Linux Kernel Netlink SCM_CREDENTIALS Processing Flaw Lets...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
U-242: Linux Kernel Netlink SCM_CREDENTIALS Processing Flaw Lets Local Users Gain Elevated Privileges
V-156: Linux Kernel Array Bounds Checking Flaw Lets Local Users...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
V-156: Linux Kernel Array Bounds Checking Flaw Lets Local Users Gain Elevated Privileges. May...
On flame kernel formation and propagation in premixed gases
Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)
2010-12-15
Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments were carried out at constant pressure and temperature in a constant-volume vessel located in a high-speed shadowgraph system. The formation and propagation of the hot plasma kernel were simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters, including the discharge energy, radiation losses, initial temperature, and initial volume of the plasma, have been studied in detail; it is concluded that these are the most important parameters affecting plasma kernel growth. The experiments were extended to flame kernel formation and propagation in methane/air mixtures, and the effects of energy terms, including spark energy, chemical energy, and energy losses, on flame kernel formation and propagation were investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. The results for laminar burning speeds have been compared with previously published results and are in good agreement.
Prediction of spark kernel development in constant volume combustion
Lim, M.T.; Anderson, R.W.; Arpaci, V.S.
1987-09-01
Combustion initiation is studied in atmospheric pressure propane-air mixtures in a constant volume bomb with a high speed (10,000 fps) laser schlieren system. The spark current and voltage waveforms are simultaneously recorded for later model input. A phenomenological model for early flame kernel development is presented which accounts for the initial, breakdown generated, spark kernel and its subsequent growth. The kernel growth is initially controlled by the breakdown process and the subsequent electrical power input. A new, spark power induced, mass entrainment term is shown to model this initially rapid volume increase adequately while later growth is mainly dominated by diffusion. Results and model comparisons are presented for the effects of power input, spark energy, and equivalence ratio.
T-583: Linux Kernel OSF Partition Table Buffer Overflow Lets Local Users Obtain Information
Broader source: Energy.gov [DOE]
A local user can create a storage device with specially crafted OSF partition tables. When the kernel automatically evaluates the partition tables, a buffer overflow may occur and data from kernel heap space may leak to user-space.
U-226: Linux Kernel SFC Driver TCP MSS Option Handling Denial...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
U-226: Linux Kernel SFC Driver TCP MSS Option Handling Denial of Service Vulnerability. August 2,...
U-056: Linux Kernel HFS Buffer Overflow Lets Local Users Gain...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
U-056: Linux Kernel HFS Buffer Overflow Lets Local Users Gain Root Privileges. December 9, 2011 - 8:00am...
U-210: Linux Kernel epoll_ctl() Bug Lets Local Users Deny Service
Broader source: Energy.gov [DOE]
A vulnerability was reported in the Linux Kernel. A local user can cause denial of service conditions.
T-571: Linux Kernel dns_resolver Key Processing Error Lets Local Users Deny Services
Broader source: Energy.gov [DOE]
A vulnerability was reported in the Linux Kernel. A local user can cause denial of service conditions.
PERI - Auto-tuning Memory Intensive Kernels for Multicore
Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
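The core loop of such an auto-tuner is simple: generate several candidate implementations of a kernel, benchmark each on the target machine, and keep the fastest. A toy sketch of that loop for SpMV (the two CSR variants and the random matrix are illustrative; the paper's code generators emit tuned C, not Python):

```python
import timeit
import numpy as np

def spmv_rowwise(indptr, indices, data, x):
    # CSR SpMV, explicit row loop (reference variant)
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        s, e = indptr[i], indptr[i + 1]
        y[i] = data[s:e] @ x[indices[s:e]]
    return y

def spmv_vectorized(indptr, indices, data, x):
    # CSR SpMV as one gather plus a segmented sum
    # (assumes every row has at least one nonzero)
    return np.add.reduceat(data * x[indices], indptr[:-1])

# random CSR matrix with exactly k nonzeros per row
rng = np.random.default_rng(1)
n, k = 200, 8
indices = rng.integers(0, n, size=n * k)
data = rng.standard_normal(n * k)
indptr = np.arange(0, n * k + 1, k)
x = rng.standard_normal(n)

# auto-tuning step: time every variant, keep the fastest
variants = {"rowwise": spmv_rowwise, "vectorized": spmv_vectorized}
times = {name: min(timeit.repeat(lambda f=f: f(indptr, indices, data, x),
                                 number=5, repeat=3))
         for name, f in variants.items()}
best = min(times, key=times.get)
```

The real systems search a much larger space (blocking, prefetching, SIMD variants) per platform, but the select-by-measurement structure is the same.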
Results from ORNL Characterization of Nominal 350 µm NUCO Kernels from the BWXT 59344 batch
Hunn, John D; Kercher, Andrew K; Menchhofer, Paul A; Price, Jeffery R
2005-01-01
This document is a compilation of characterization data obtained on nominal 350 µm natural-enrichment uranium oxide/uranium carbide kernels (NUCO) produced by BWXT for the Advanced Gas Reactor Fuel Development and Qualification Program. These kernels were produced as part of a development effort at BWXT to address issues involving forming and heat treatment, and were shipped to ORNL for additional characterization and for coating tests. The kernels were identified as G73N-NU-59344; 250 grams were shipped to ORNL. Size, shape, and microstructural analyses were performed. These kernels were preceded by G73B-NU-69300 and G73B-NU-69301, which were kernels produced and delivered to ORNL earlier in the development phase. Characterization of the kernels from G73B-NU-69300 was summarized in ORNL/CF-04/07, 'Results from ORNL Characterization of Nominal 350 µm NUCO Kernels from the BWXT 69300 composite'.
U-080: Linux Kernel XFS Heap Overflow May Let Remote Users Execute Arbitrary Code
Broader source: Energy.gov [DOE]
A vulnerability was reported in the Linux Kernel. A remote user can cause arbitrary code to be executed on the target user's system.
T-653: Linux Kernel sigqueueinfo() Process Lets Local Users Send Spoofed Signals
Broader source: Energy.gov [DOE]
A vulnerability was reported in the Linux Kernel. A local user can send spoofed signals to other processes in certain cases.
TORCH Computational Reference Kernels - A Testbed for Computer Science Research
Kaiser, Alex; Williams, Samuel Webb; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David H.; Demmel, James W.; Strohmaier, Erich
2010-12-02
For decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models in order to improve application performance, efficiency, and productivity. Unfortunately, without overarching advice about future directions in these areas, individual guidance is inferred from the existing software/hardware ecosystem, and each discipline often conducts its research independently, assuming all other technologies remain fixed. In today's rapidly evolving world of on-chip parallelism, isolated and iterative improvements to performance may miss superior solutions in the same way gradient-descent optimization techniques may get stuck in local minima. To combat this, we present TORCH: A Testbed for Optimization ResearCH. These computational reference kernels define the core problems of interest in scientific computing without mandating a specific language, algorithm, programming model, or implementation. To complement the kernel (problem) definitions, we provide a set of algorithmically expressed verification tests that can be used to verify that a hardware/software co-designed solution produces an acceptable answer. Finally, to provide some illumination as to how researchers have implemented solutions to these problems in the past, we provide a set of reference implementations in C and MATLAB.
T-601: Windows Kernel win32k.sys Lets Local Users Gain Elevated Privileges
Broader source: Energy.gov [DOE]
Multiple vulnerabilities were reported in the Windows Kernel. A local user can obtain elevated privileges on the target system. A local user can trigger a use-after-free or null pointer dereference to execute arbitrary commands on the target system with kernel-level privileges.
FABRICATION PROCESS AND PRODUCT QUALITY IMPROVEMENTS IN ADVANCED GAS REACTOR UCO KERNELS
Charles M Barnes
2008-09-01
A major element of the Advanced Gas Reactor (AGR) program is developing fuel fabrication processes to produce high-quality uranium-containing kernels, TRISO-coated particles, and fuel compacts needed for planned irradiation tests. The goals of the AGR program also include developing the fabrication technology to mass produce this fuel at low cost. Kernels for the first AGR test ("AGR-1") consisted of uranium oxycarbide (UCO) microspheres that were produced by an internal gelation process followed by high-temperature steps to convert the UO3 + C "green" microspheres first to UO2 + C and then to UO2 + UCx. The high-temperature steps also densified the kernels. Babcock and Wilcox (B&W) fabricated UCO kernels for the AGR-1 irradiation experiment, which went into the Advanced Test Reactor (ATR) at Idaho National Laboratory in December 2006. An evaluation of the kernel process following AGR-1 kernel production led to several recommendations to improve the fabrication process. These recommendations included testing alternative methods of dispersing carbon during broth preparation, evaluating the method of broth mixing, optimizing the broth chemistry, optimizing sintering conditions, and demonstrating fabrication of larger-diameter UCO kernels needed for the second AGR irradiation test. Based on these recommendations and requirements, a test program was defined and performed. Certain portions of the test program were performed by Oak Ridge National Laboratory (ORNL), while tests at larger scale were performed by B&W. The tests at B&W have demonstrated improvements in both kernel properties and process operation. Changes in the form of carbon black used and the method of mixing the carbon prior to forming kernels led to improvements in the phase distribution in the sintered kernels, greater consistency in kernel properties, a reduction in forming run time, and simplifications to the forming process. Process parameter variation tests in both forming and sintering steps led
FABRICATION OF URANIUM OXYCARBIDE KERNELS AND COMPACTS FOR HTR FUEL
Dr. Jeffrey A. Phillips; Eric L. Shaber; Scott G. Nagley
2012-10-01
As part of the program to demonstrate tristructural isotropic (TRISO)-coated fuel for the Next Generation Nuclear Plant (NGNP), Advanced Gas Reactor (AGR) fuel is being irradiation tested in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL). This testing has led to improved kernel fabrication techniques, the formation of TRISO fuel particles, and upgrades to the overcoating, compaction, and heat treatment processes. Combined, these improvements provide a fuel manufacturing process that meets the stringent requirements associated with testing in the AGR experimentation program. Researchers at INL are working in conjunction with a team from Babcock and Wilcox (B&W) and Oak Ridge National Laboratory (ORNL) to (a) improve the quality of uranium oxycarbide (UCO) fuel kernels, (b) deposit TRISO layers to produce a fuel that meets or exceeds the standard developed by German researchers in the 1980s, and (c) develop a process to overcoat TRISO particles with the same matrix material but apply it with water, using equipment previously and successfully employed in the pharmaceutical industry. A primary goal of this work is to simplify the process, making it more robust and repeatable while relying less on operator technique than prior overcoating efforts. A secondary goal is to improve first-pass yields to greater than 95% through the use of established technology and equipment. In the first test, called "AGR-1," graphite compacts containing approximately 300,000 coated particles were irradiated from December 2006 to November 2009. The AGR-1 fuel was designed to closely replicate many of the properties of German TRISO-coated particles thought to be important for good fuel performance. No release of gaseous fission products, indicative of particle coating failure, was detected in the nearly 3-year irradiation to a peak burnup of 19.6% at a time-average temperature of 1038–1121°C. Before fabricating AGR-2 fuel, each
AGR-5/6/7 LEUCO Kernel Fabrication Readiness Review
Marshall, Douglas W.; Bailey, Kirk W.
2015-02-01
In preparation for forming low-enriched uranium carbide/oxide (LEUCO) fuel kernels for the Advanced Gas Reactor (AGR) fuel development and qualification program, Idaho National Laboratory conducted an operational readiness review of the Babcock & Wilcox Nuclear Operations Group – Lynchburg (B&W NOG-L) procedures, processes, and equipment from January 14 – January 16, 2015. The readiness review focused on requirements taken from the American Society of Mechanical Engineers (ASME) Nuclear Quality Assurance Standard (NQA-1-2008, 1a-2009), a recent occurrence at the B&W NOG-L facility related to preparation of acid-deficient uranyl nitrate solution (ADUN), and a second look at concerns noted in a previous review. Topic areas open for the review were communicated to B&W NOG-L in advance of the on-site visit to facilitate the collection of objective evidence attesting to the state of readiness.
libMSR library and msr-safe kernel module
Energy Science and Technology Software Center (OSTI)
2013-09-26
Modern processors offer a wide range of control and measurement features. While these are traditionally accessed through libraries like PAPI, some newer features no longer follow the traditional model of counters that can only be used to read the state of the processor. For example, Precise Event Based Sampling (PEBS) can generate records that require kernel memory for storage. Additionally, new features like power capping and thermal control require similar new access methods. All of these features are ultimately controlled through Model Specific Registers (MSRs). We therefore need new mechanisms to make such features available to tools and ultimately to the user. libMSR provides a convenient interface to access MSRs and to allow tools to utilize their full functionality.
Dynamic extension of the Simulation Problem Analysis Kernel (SPANK)
Sowell, E.F. (Dept. of Computer Science); Buhl, W.F.
1988-07-15
The Simulation Problem Analysis Kernel (SPANK) is an object-oriented simulation environment for general simulation purposes. Among its unique features is use of the directed graph as the primary data structure, rather than the matrix. This allows straightforward use of graph algorithms for matching variables and equations, and reducing the problem graph for efficient numerical solution. The original prototype implementation demonstrated the principles for systems of algebraic equations, allowing simulation of steady-state, nonlinear systems (Sowell 1986). This paper describes how the same principles can be extended to include dynamic objects, allowing simulation of general dynamic systems. The theory is developed and an implementation is described. An example is taken from the field of building energy system simulation. 2 refs., 9 figs.
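The graph operation at the heart of this approach, pairing each equation with the variable it will be solved for, is ordinary bipartite matching on the equation-variable graph. A minimal sketch under that reading (the toy equation system and function names are hypothetical; SPANK's actual data structures are not shown in the abstract):

```python
def match_equations(system):
    # Match each equation to a distinct variable using augmenting paths:
    # the classic bipartite-matching step a graph-based solver needs
    # before it can order the system for numerical solution.
    # system: dict mapping equation name -> set of variable names in it.
    assigned = {}  # variable -> equation currently matched to it

    def augment(eq, seen):
        for var in system[eq]:
            if var in seen:
                continue
            seen.add(var)
            # take a free variable, or displace its current equation
            if var not in assigned or augment(assigned[var], seen):
                assigned[var] = eq
                return True
        return False

    solvable = all(augment(eq, set()) for eq in system)
    return solvable, assigned

# toy system of three equations over variables x, y, z
solvable, assigned = match_equations({
    "e1": {"x", "y"},
    "e2": {"y", "z"},
    "e3": {"x"},   # e3 can only be solved for x
})
```

Because e3 contains only x, any perfect matching must assign x to e3, which forces e1 onto y and e2 onto z; the augmenting-path search discovers this automatically.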
SAR Image Complex Pixel Representations
Doerry, Armin W.
2015-03-01
Complex pixel values for Synthetic Aperture Radar (SAR) images of uniformly distributed clutter can be represented either as real/imaginary (also known as I/Q) values or as Magnitude/Phase values. Generally, these component values are integers with a limited number of bits. For clutter energy well below full scale, Magnitude/Phase offers lower quantization noise than the I/Q representation. Further improvement can be had with companding of the Magnitude value.
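The claim can be checked numerically: with the same bit budget per component, the I/Q quantization error is a fixed fraction of full scale, while the Magnitude/Phase error shrinks with signal amplitude, because the phase error is tangential and proportional to magnitude. A small numpy experiment (the clutter level, 8-bit depth, and Rayleigh amplitude model are illustrative assumptions, not parameters from the report):

```python
import numpy as np

def quantize(v, step):
    # uniform mid-tread quantizer
    return np.round(v / step) * step

rng = np.random.default_rng(0)
full_scale = 1.0
bits = 8

# clutter well below full scale: Rayleigh magnitudes around 3% of full scale
r = 0.03 * rng.rayleigh(1.0, 100_000)
phi = rng.uniform(0, 2 * np.pi, r.size)
z = r * np.exp(1j * phi)

# (a) I/Q: each component quantized uniformly over [-full_scale, full_scale]
q_iq = 2 * full_scale / 2 ** bits
z_iq = quantize(z.real, q_iq) + 1j * quantize(z.imag, q_iq)

# (b) Magnitude/Phase: magnitude over [0, full_scale], phase over [0, 2*pi)
q_m = full_scale / 2 ** bits
q_p = 2 * np.pi / 2 ** bits
z_mp = quantize(r, q_m) * np.exp(1j * quantize(phi, q_p))

rms_iq = np.sqrt(np.mean(np.abs(z - z_iq) ** 2))
rms_mp = np.sqrt(np.mean(np.abs(z - z_mp) ** 2))
```

For this low-amplitude clutter, rms_mp comes out well below rms_iq, matching the report's observation; companding the magnitude would reduce it further.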
Temporal Representation in Semantic Graphs
Levandoski, J J; Abdulla, G M
2007-08-07
A wide range of knowledge discovery and analysis applications, from business to biology, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning the temporal evolution of concepts and relationships is not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work on temporal representations in semantic graphs.
PART IV REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
K, Page i PART IV - REPRESENTATIONS AND INSTRUCTIONS SECTION K REPRESENTATIONS, CERTIFICATIONS, AND OTHER STATEMENTS OF OFFERORS K-1 FAR 52.204-8 ANNUAL REPRESENTATIONS AND CERTIFICATIONS (DEC 2014) .................. 131 K-2 FAR 52.204-16 COMMERCIAL AND GOVERNMENT ENTITY CODE REPORTING (JUL 2015) ...................................................................................................................................................................... 135 K-3 FAR 52.209-7 INFORMATION
STORM: A STatistical Object Representation Model
Rafanelli, M. ); Shoshani, A. )
1989-11-01
In this paper we explore the structure and semantic properties of the entities stored in statistical databases. We call such entities "statistical objects" (SOs) and propose a new "statistical object representation model" based on a graph representation. We identify a number of SO representational problems in current models and propose a methodology for their solution. 11 refs.
Representation of Limited Rights Data and Restricted Computer...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Representation of Limited Rights Data and Restricted Computer Software ...
Code System to Calculate Correlation & Regression Coefficients.
Energy Science and Technology Software Center (OSTI)
1999-11-23
Version 00 PCC/SRC is designed for use in conjunction with sensitivity analyses of complex computer models. PCC/SRC calculates the partial correlation coefficients (PCC) and the standardized regression coefficients (SRC) from the multivariate input to, and output from, a computer model.
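For two predictors, standardized regression coefficients can be computed directly from the 2x2 normal equations on standardized variables. This is a generic illustration of what an SRC is, not the PCC/SRC code itself, and the model inputs below are made up:

```python
import math

def standardize(xs):
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return [(x - mean) / sd for x in xs]

def src_two_predictors(x1, x2, y):
    """Standardized regression coefficients for y ~ x1 + x2,
    via the 2x2 normal equations on standardized variables."""
    z1, z2, zy = standardize(x1), standardize(x2), standardize(y)
    n = len(y) - 1
    r11 = sum(a * b for a, b in zip(z1, z1)) / n
    r12 = sum(a * b for a, b in zip(z1, z2)) / n
    r22 = sum(a * b for a, b in zip(z2, z2)) / n
    c1 = sum(a * b for a, b in zip(z1, zy)) / n
    c2 = sum(a * b for a, b in zip(z2, zy)) / n
    det = r11 * r22 - r12 * r12
    return ((r22 * c1 - r12 * c2) / det, (r11 * c2 - r12 * c1) / det)

# Hypothetical model output driven mostly by the first input
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 1.0, 4.0, 3.0, 5.0]
y = [3 * a + 0.5 * b for a, b in zip(x1, x2)]
b1, b2 = src_two_predictors(x1, x2, y)
```

The larger coefficient flags the input to which the model output is most sensitive, which is exactly how SRCs are used in sensitivity analysis.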
SEP Request for Approval Form 3 - Other Complex Regression Model...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
SEP Request for Approval Form 3 - Other Complex Regression Model Rationale ...
U-068:Linux Kernel SG_IO ioctl Bug Lets Local Users Gain Elevated...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Linux Kernel SG_IO ioctl Bug Lets Local Users Gain Elevated Privileges. PLATFORM: Red Hat Enterprise Linux Desktop (v. 6), Red Hat Enterprise Linux HPC Node (v. 6), Red Hat...
Spark ignited turbulent flame kernel growth. Annual report, January--December, 1992
Santavicca, D.A.
1994-06-01
Cyclic combustion variations in spark-ignition engines limit the use of dilute charge strategies for achieving low NO{sub x} emissions and improved fuel economy. Results from an experimental study of the effect of incomplete fuel-air mixing (ifam) on spark-ignited flame kernel growth in turbulent propane-air mixtures are presented. The experiments were conducted in a turbulent flow system that allows for independent variation of flow parameters, ignition system parameters, and the degree of fuel-air mixing. Measurements were made at 1 atm and 300 K. Five cases were studied: a premixed case and four incompletely mixed cases with 6%, 13%, 24%, and 33% RMS (root-mean-square) fluctuations in the fuel/air equivalence ratio. High-speed laser shadowgraphy at 4,000 frames per second was used to record flame kernel growth following spark ignition, from which the equivalent flame kernel radius as a function of time was determined. The effect of ifam was evaluated in terms of the flame kernel growth rate, cyclic variations in the flame kernel growth, and the rate of misfire. The results show that fluctuations in local mixture strength due to ifam cause the flame kernel surface to become wrinkled and distorted, and that the amount of wrinkling increases with the degree of ifam. Ifam was also found to result in a significant increase in cyclic variations in the flame kernel growth. The average flame kernel growth rates for the premixed and the incompletely mixed cases were found to be within the experimental uncertainty, except for the 33%-RMS-fluctuation case, where the growth rate is significantly lower. The premixed and 6%-RMS-fluctuation cases had a 0% misfire rate. The misfire rates were 1% and 2% for the 13%-RMS-fluctuation and 24%-RMS-fluctuation cases, respectively; however, the misfire rate rose drastically to 23% in the 33%-RMS-fluctuation case.
STELLAR LOCUS REGRESSION: ACCURATE COLOR CALIBRATION AND THE...
Office of Scientific and Technical Information (OSTI)
REGRESSION: ACCURATE COLOR CALIBRATION AND THE REAL-TIME DETERMINATION OF GALAXY CLUSTER PHOTOMETRIC REDSHIFTS ...
Part II - Managerial Competencies: Organizational Representation...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Part II - Managerial Competencies: Organizational Representation and Liaison Form for the SES program emphasizes the range of communications and public relations aspects of ...
PART IV REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
... Computer Software. (d) The offeror has completed the annual representations and certifications electronically via the SAM Web site accessed through https://www.acquisition.gov. ...
Luttman, A.
2012-10-08
This slide-show discusses the use of the Local Polynomial Approximation (LPA) to smooth signals from photonic Doppler velocimetry (PDV) applying a generalized Peano kernel theorem.
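A centered local quadratic fit on a sliding window is the simplest instance of LPA (equivalent to a Savitzky-Golay filter). The window length and signal below are illustrative and not taken from the talk:

```python
def lpa_smooth(signal):
    """Local quadratic least-squares fit on a centered 5-point window,
    evaluated at the window center (a Savitzky-Golay filter)."""
    w = [-3, 12, 17, 12, -3]  # quadratic LS weights, to be divided by 35
    out = list(signal)        # endpoints are left unsmoothed
    for i in range(2, len(signal) - 2):
        out[i] = sum(c * signal[i + k - 2] for k, c in enumerate(w)) / 35
    return out

# A quadratic is reproduced exactly; noise spikes would be attenuated.
quad = [t * t for t in range(8)]
smoothed = lpa_smooth(quad)
```

Because the fit is a degree-2 polynomial, any signal that is locally quadratic passes through unchanged, which is the property that makes LPA attractive for smoothing PDV-like signals without biasing smooth trends.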
Representable states on quasilocal quasi *-algebras
Bagarello, F.; Trapani, C.; Triolo, S.
2011-01-15
Continuing a previous analysis originally motivated by physics, we consider representable states on quasilocal quasi *-algebras, starting with examining the possibility for a compatible family of local states to give rise to a global state. Some properties of local modifications of representable states and some aspects of their asymptotic behavior are also considered.
Lindemer, Terrence; Voit, Stewart L; Silva, Chinthaka M; Besmann, Theodore M; Hunt, Rodney Dale
2014-01-01
The U.S. Department of Energy is considering a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with large, dense uranium nitride (UN) kernels. This effort explores many factors involved in using gel-derived uranium oxide-carbon microspheres to make large UN kernels. Analysis of recent studies with sufficient experimental details is provided. Extensive thermodynamic calculations are used to predict carbon monoxide and other pressures for several different reactions that may be involved in the conversion of uranium oxides and carbides to UN. Experimentally, the method for making the gel-derived microspheres is described. These were used in a microbalance with an attached mass spectrometer to determine details of carbothermic conversion in argon, nitrogen, or vacuum. A quantitative model is derived from experiments for vacuum conversion to a uranium oxide-carbide kernel.
Problematic projection to the in-sample subspace for a kernelized anomaly detector
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Theiler, James; Grosklos, Guen
2016-03-07
We examine the properties and performance of kernelized anomaly detectors, with an emphasis on the Mahalanobis-distance-based kernel RX (KRX) algorithm. Although the detector generally performs well for high-bandwidth Gaussian kernels, it exhibits problematic (in some cases, catastrophic) performance for distances that are large compared to the bandwidth. By comparing KRX to two other anomaly detectors, we can trace the problem to a projection in feature space, which arises when a pseudoinverse is used on the covariance matrix in that feature space. Here, we show that a regularized variant of KRX overcomes this difficulty and achieves superior performance over a wide range of bandwidths.
Predictive based monitoring of nuclear plant component degradation using support vector regression
Agarwal, Vivek; Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.
2015-02-01
Nuclear power plants (NPPs) are large installations comprised of many active and passive assets. Degradation monitoring of all these assets is an expensive (labor cost) and highly demanding task. In this paper a framework based on Support Vector Regression (SVR) for online surveillance of critical parameter degradation of NPP components is proposed. In this case, on-time replacement or maintenance of components will prevent potential plant malfunctions and reduce the overall operational cost. In the current work, we apply SVR equipped with a Gaussian kernel function to monitor components. Monitoring includes the one-step-ahead prediction of the component's respective operational quantity using the SVR model, while the SVR model is trained using a set of previously recorded degradation histories of similar components. The predictive capability of the model is evaluated upon arrival of a sensor measurement, which is compared to the component failure threshold. A maintenance decision is based on a fuzzy inference system that utilizes three parameters: (i) the prediction evaluation in the previous steps, (ii) the predicted value of the current step, and (iii) the difference between the current predicted value and the component's failure threshold. The proposed framework will be tested on turbine blade degradation data.
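A minimal sketch of the one-step-ahead idea, using Gaussian kernel ridge regression as a stand-in for SVR (full SVR requires a quadratic-programming solver); the degradation history below is hypothetical:

```python
import math

def gaussian_kernel(a, b, sigma=1.0):
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def kernel_ridge_fit(x, y, lam=1e-3, sigma=1.0):
    """Solve (K + lam*I) alpha = y by Gauss-Jordan elimination.
    A kernel ridge stand-in for the paper's SVR model."""
    n = len(x)
    A = [[gaussian_kernel(x[i], x[j], sigma) + (lam if i == j else 0.0)
          for j in range(n)] + [y[i]] for i in range(n)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        piv = A[col][col]
        A[col] = [v / piv for v in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] for i in range(n)]

def predict(x_train, alpha, x_new, sigma=1.0):
    return sum(a * gaussian_kernel(xi, x_new, sigma)
               for a, xi in zip(alpha, x_train))

# Hypothetical degradation history: predict the next reading, which the
# paper's framework would compare against a failure threshold.
t = [0.0, 1.0, 2.0, 3.0, 4.0]
wear = [0.0, 0.11, 0.19, 0.32, 0.40]
alpha = kernel_ridge_fit(t, wear)
next_wear = predict(t, alpha, 5.0)
```

In the paper's framework this prediction would then feed the fuzzy inference system together with the component's failure threshold.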
PT-symmetric representations of fermionic algebras
Bender, Carl M.; Klevansky, S. P.
2011-08-15
A recent paper by Jones-Smith and Mathur, Phys. Rev. A 82, 042101 (2010), extends PT-symmetric quantum mechanics from bosonic systems (systems for which T{sup 2}=1) to fermionic systems (systems for which T{sup 2}=-1). The current paper shows how the formalism developed by Jones-Smith and Mathur can be used to construct PT-symmetric matrix representations for operator algebras of the form {eta}{sup 2}=0, {eta-bar}{sup 2}=0, {eta}{eta-bar}+{eta-bar}{eta}={alpha}1, where {eta-bar}={eta}{sup PT}=PT{eta}T{sup -1}P{sup -1}. It is easy to construct matrix representations for the Grassmann algebra ({alpha}=0). However, one can only construct matrix representations for the fermionic operator algebra ({alpha}{ne}0) if {alpha}=-1; a matrix representation does not exist for the conventional value {alpha}=1.
Mental Representations Formed From Educational Website Formats
Elizabeth T. Cady; Kimberly R. Raddatz; Tuan Q. Tran; Bernardo de la Garza; Peter D. Elgin
2006-10-01
The increasing popularity of web-based distance education places high demand on distance educators to format web pages to facilitate learning. However, limited guidelines exist regarding appropriate writing styles for web-based distance education. This study investigated the effect of four different writing styles on readers' mental representations of hypertext. Participants studied hypertext written in one of four web-writing styles (concise, scannable, objective, or combined) and were then administered a cued association task intended to measure their mental representations of the hypertext. It is hypothesized that the scannable and combined styles will bias readers to scan rather than elaborately read, which may result in less dense mental representations (as identified through Pathfinder analysis) relative to the objective and concise writing styles. Further, the use of more descriptors in the objective writing style will lead to better integration of ideas and more dense mental representations than the concise writing style.
Part II - Managerial Competencies: Organizational Representation and
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Part II - Managerial Competencies: Organizational Representation and Liaison. Form for the SES program emphasizes the range of communications and public relations aspects of executive positions as found in official correspondence and documentation, as well as formal and informal verbal communications, and it describes the major competencies within this activity.
Reiner, Dora; Blaickner, Matthias; Rattay, Frank
2009-11-15
Purpose: Radiopharmaceuticals administered in targeted radionuclide therapy (TRT) rely to a great extent not only on beta-emitting nuclides but also on emitters of monoenergetic electrons. Recent advances like combined PET/CT devices, the consequential coregistration of both data, the concept of using beta couples for diagnosis and therapy, respectively, as well as the development of voxel models offer a great potential for developing TRT dose calculation systems similar to those available for external beam treatment planning. The deterministic algorithms in question for this task are based on the convolution of three-dimensional matrices, one representing the activity distribution and the other the dose point kernel. This study aims to report on three-dimensional kernel matrices for various nuclides used in TRT. Methods: The Monte Carlo code MCNP5 was used to calculate discrete dose kernels of beta particles including the contributions from their respective secondary radiation in soft tissue for the following nuclides: {sup 32}P, {sup 33}P, {sup 67}Cu, {sup 89}Sr, {sup 90}Y, {sup 103}Rh{sup m}, {sup 131}I, {sup 177}Lu, {sup 186}Re, and {sup 188}Re. For each nuclide a kernel cube of 10x10x10 mm{sup 3} was calculated, the dimensions of a voxel being 1 mm{sup 3}. Additional kernels with voxel sizes of 3x3x3 mm{sup 3} were simulated. Results: Comparison with the S-value data regarding {sup 32}P, {sup 89}Sr, {sup 90}Y, and {sup 131}I of the MIRD committee which were calculated with the EGS4 code showed a very good agreement, the secondary particle transport of {sup 90}Y being the only exception. Documented analytical kernels on the other side show deviations very close and very far to the source. Conclusions: The good accordance with the only discrete dose kernels published up to date justifies the method chosen. Together with the additional six nuclides, this report provides a considerable database for three-dimensional kernel matrices with regard to beta
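The core operation the study supports, convolving an activity distribution with a dose point kernel, reduces in one dimension to the following sketch; the kernel values are made up, not the MCNP5-derived matrices:

```python
def convolve_dose(activity, kernel):
    """Direct spatial convolution of a 1-D activity distribution with
    a symmetric dose point kernel (a toy analog of the 3-D case)."""
    half = len(kernel) // 2
    dose = [0.0] * len(activity)
    for i, a in enumerate(activity):
        if a == 0.0:
            continue  # only source voxels deposit dose
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(dose):
                dose[j] += a * w
    return dose

# A single hot voxel: the dose profile reproduces the kernel shape.
kernel = [0.05, 0.25, 0.40, 0.25, 0.05]   # normalized toy kernel
dose = convolve_dose([0, 0, 1.0, 0, 0], kernel)
```

In practice the convolution is done over 3-D matrices (and often in the Fourier domain for speed), but the principle of spreading each voxel's activity through the kernel is the same.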
Verification and large deformation analysis using the reproducing kernel particle method
Beckwith, Frank
2015-09-01
The reproducing kernel particle method (RKPM) is a meshless method used to solve general boundary value problems using the principle of virtual work. RKPM corrects the kernel approximation by introducing reproducing conditions which force the method to be complete to arbitrary-order polynomials selected by the user. Effort in recent years has led to the implementation of RKPM within the Sierra/SM physics software framework. The purpose of this report is to investigate convergence of RKPM for verification and validation purposes as well as to demonstrate the large-deformation capability of RKPM in problems where the finite element method is known to experience difficulty. Results from analyses using RKPM are compared against finite element analysis. A host of issues associated with RKPM are identified and a number of potential improvements are discussed for future work.
Azcona, J; Burguete, J
2014-06-01
Purpose: To obtain the pencil beam kernels that characterize a megavoltage photon beam generated in a FFF linac by experimental measurements, and to apply them for dose calculation in modulated fields. Methods: Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from a Varian True Beam (Varian Medical Systems, Palo Alto, CA) linac, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The measured dose leads to the kernel characterization, assuming that the energy fluence exiting the linac head and further collimated originates from a point source. The three-dimensional kernel was obtained by deconvolution at each depth using the Hankel transform. A correction on the low-dose part of the kernel was performed to accurately reproduce the experimental output factors. The kernels were used to calculate modulated dose distributions in six modulated fields and compared through the gamma index to their absolute dose measured by film in the RW3 phantom. Results: The resulting kernels properly characterize the global beam penumbra. The output factor-based correction was carried out by adding the amount of signal necessary to reproduce the experimental output factor in steps of 2 mm, starting at a radius of 4 mm. There the kernel signal was in all cases below 10% of its maximum value. With this correction, the number of points that pass the gamma index criteria (3%, 3 mm) in the modulated fields is in all cases at least 99.6% of the total number of points. Conclusion: A system for independent dose calculations in modulated fields from FFF beams has been developed. Pencil beam kernels were obtained and their ability to accurately calculate dose in homogeneous media was demonstrated.
Data summary for nominal 500 {micro}m DUO{sub 2} kernels
Hunn, John D
2004-04-01
This document is a compilation of characterization data obtained on the nominal 500 {micro}m DUO{sub 2} kernels produced by ORNL for the Advanced Gas Reactor Fuel Development and Qualification Program to satisfy the FY03 WBS 3.1.2 task milestone No. 2. 2.2 kg of kernels were produced and combined in two composite lots. DUN-500 was a 1630 g composite sieved between 500 {+-} 2 {micro}m and 534 {+-} 2 {micro}m ASTM E161 electroformed sieves. DUN-482 was a 385.6 g composite sieved between 482 {+-} 2 {micro}m and 518 {+-} 2 {micro}m ASTM E161 electroformed sieves. Size, shape, density, and microstructural analysis were performed on a 100 g sublot (DUN-500-S-1) riffled from the DUN-500 composite. Size and shape were also measured on a 100 g sublot (DUN-482-S-1) riffled from the DUN-482 composite. For comparison, analysis was also performed on kernels extracted from the German reference fuel EUO 2358-2365 (AGR-06).
Petersen, Jakob; Pollak, Eli
2015-12-14
One of the challenges facing on-the-fly ab initio semiclassical time evolution is the large expense needed to converge the computation. In this paper, we suggest that a significant saving in computational effort may be achieved by employing a semiclassical initial value representation (SCIVR) of the quantum propagator based on the Heisenberg interaction representation. We formulate and test numerically a modification and simplification of the previous semiclassical interaction representation of Shao and Makri [J. Chem. Phys. 113, 3681 (2000)]. The formulation is based on the wavefunction form of the semiclassical propagation instead of the operator form, and so is simpler and cheaper to implement. The semiclassical interaction representation has the advantage that the phase and prefactor vary relatively slowly as compared to the “standard” SCIVR methods. This improves its convergence properties significantly. Using a one-dimensional model system, the approximation is compared with Herman-Kluk’s frozen Gaussian and Heller’s thawed Gaussian approximations. The convergence properties of the interaction representation approach are shown to be favorable and indicate that the interaction representation is a viable way of incorporating on-the-fly force field information within a semiclassical framework.
Representations of some quantum tori Lie subalgebras
Jiang, Jingjing; Wang, Song
2013-03-15
In this paper, we define the q-analog Virasoro-like Lie subalgebras in x{sub {infinity}}=a{sub {infinity}}(b{sub {infinity}}, c{sub {infinity}}, d{sub {infinity}}). The embedding formulas into x{sub {infinity}} are introduced. Irreducible highest weight representations of the Ã{sub q}, B̃{sub q}, and C̃{sub q} series of the q-analog Virasoro-like Lie algebras in terms of vertex operators are constructed. We also construct the polynomial representations of the Ã{sub q}, B̃{sub q}, C̃{sub q}, and D̃{sub q} series of the q-analog Virasoro-like Lie algebras.
Group representations, error bases and quantum codes
Knill, E
1996-01-01
This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.
Robust regression on noisy data for fusion scaling laws
Verdoolaege, Geert
2014-11-15
We introduce the method of geodesic least squares (GLS) regression for estimating fusion scaling laws. Based on straightforward principles, the method is easily implemented, yet it clearly outperforms established regression techniques, particularly in cases of significant uncertainty on both the response and predictor variables. We apply GLS for estimating the scaling of the L-H power threshold, resulting in estimates for ITER that are somewhat higher than predicted earlier.
TURBULENCE-INDUCED RELATIVE VELOCITY OF DUST PARTICLES. IV. THE COLLISION KERNEL
Pan, Liubin; Padoan, Paolo
2014-12-20
Motivated by its importance for modeling dust particle growth in protoplanetary disks, we study turbulence-induced collision statistics of inertial particles as a function of the particle friction time, τ{sub p}. We show that turbulent clustering significantly enhances the collision rate for particles of similar sizes with τ{sub p} corresponding to the inertial range of the flow. If the friction time, τ{sub p,h}, of the larger particle is in the inertial range, the collision kernel per unit cross section increases with increasing friction time, τ{sub p,l}, of the smaller particle and reaches the maximum at τ{sub p,l} = τ{sub p,h}, where the clustering effect peaks. This feature is not captured by the commonly used kernel formula, which neglects the effect of clustering. We argue that turbulent clustering helps alleviate the bouncing barrier problem for planetesimal formation. We also investigate the collision velocity statistics using a collision-rate weighting factor to account for higher collision frequency for particle pairs with larger relative velocity. For τ{sub p,h} in the inertial range, the rms relative velocity with collision-rate weighting is found to be invariant with τ{sub p,l} and scales with τ{sub p,h} roughly as ∝ τ{sub p,h}{sup 1/2}. The weighting factor favors collisions with larger relative velocity, and including it leads to more destructive and less sticking collisions. We compare two collision kernel formulations based on spherical and cylindrical geometries. The two formulations give consistent results for the collision rate and the collision-rate weighted statistics, except that the spherical formulation predicts more head-on collisions than the cylindrical formulation.
Computing traveltime and amplitude sensitivity kernels in finite-frequency tomography
Tian, Yue; Montelli, Raffaella; Nolet, Guust; Dahlen, F.A.
2007-10-01
The efficient computation of finite-frequency traveltime and amplitude sensitivity kernels for velocity and attenuation perturbations in global seismic tomography poses problems both of numerical precision and of validity of the paraxial approximation used. We investigate these aspects, using a local model parameterization in the form of a tetrahedral grid with linear interpolation in between grid nodes. The matrix coefficients of the linear inverse problem involve a volume integral of the product of the finite-frequency kernel with the basis functions that represent the linear interpolation. We use local and global tests as well as analytical expressions to test the numerical precision of the frequency and spatial quadrature. There is a trade-off between narrowing the bandpass filter and quadrature accuracy and efficiency. Using a minimum step size of 10 km for S waves and 30 km for SS waves, relative errors in the quadrature are of the order of 1% for direct waves such as S, and a few percent for SS waves, which are below data uncertainties in delay time or amplitude anomaly observations in global seismology. Larger errors may occur wherever the sensitivity extends over a large volume and the paraxial approximation breaks down at large distance from the ray. This is especially noticeable for minimax phases such as SS waves with periods >20 s, when kernels become hyperbolic near the reflection point and appreciable sensitivity extends over thousands of km. Errors become intolerable at epicentral distances near the antipode, when sensitivity extends over all azimuths in the mantle. Effects of such errors may become noticeable at epicentral distances > 140{sup o}. We conclude that the paraxial approximation offers an efficient method for computing the matrix system for finite-frequency inversions in global tomography, though care should be taken near reflection points, and alternative methods are needed to compute sensitivity near the antipode.
Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo simulations. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
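The contrast with histogram tallies comes from each sample contributing a smooth bump to every nearby tally point. A generic 1-D Gaussian KDE (not the paper's mean-free-path formulation) shows the mechanics, with made-up sample values:

```python
import math

def gaussian_kde(samples, x, bandwidth):
    """Gaussian kernel density estimate at x: each sample contributes
    a smooth bump, unlike a histogram's binned counts."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in samples)

# Toy "tally": samples clustered near 0 give a density peaked there
# and a value near zero far away, at any evaluation resolution.
samples = [-0.2, -0.1, 0.0, 0.1, 0.2]
peak = gaussian_kde(samples, 0.0, bandwidth=0.3)
tail = gaussian_kde(samples, 2.0, bandwidth=0.3)
```

Because the estimate can be evaluated at arbitrary points, the statistical uncertainty is governed by the samples and bandwidth rather than by a bin width, which is the variance advantage the abstract describes.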
Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation
Huang, Hao; Yoo, Shinjae; Yu, Dantong; Qin, Hong
2015-06-01
Current spectral clustering algorithms suffer from sensitivity to existing noise and parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the consequent clustering results cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve the clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping of the original dataset, but it also demonstrates stability while tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
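A much-simplified sketch of the scale-aggregation idea: build a heat-kernel affinity at several diffusion times and average them. This is illustrative only, with made-up points, and omits AHK's full time-scale integration and the LDAT density correction:

```python
import math

def heat_kernel_affinity(points, t):
    """Heat-kernel affinity W_ij = exp(-d_ij^2 / (4t)) for 1-D points."""
    n = len(points)
    return [[math.exp(-((points[i] - points[j]) ** 2) / (4 * t))
             for j in range(n)] for i in range(n)]

def aggregated_affinity(points, scales):
    """Average the heat-kernel affinity over several diffusion times,
    so no single scaling parameter dominates the result."""
    n = len(points)
    agg = [[0.0] * n for _ in range(n)]
    for t in scales:
        W = heat_kernel_affinity(points, t)
        for i in range(n):
            for j in range(n):
                agg[i][j] += W[i][j] / len(scales)
    return agg

# Two 1-D clusters: within-cluster affinity stays higher than
# between-cluster affinity across the aggregated scales.
pts = [0.0, 0.1, 5.0, 5.1]
A = aggregated_affinity(pts, scales=[0.5, 1.0, 2.0])
```

The aggregated matrix would then feed a standard spectral clustering pipeline in place of a single-scale Gaussian affinity.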
Liu, Derek; Sloboda, Ron S.
2014-05-15
Purpose: Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. Methods: The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Results: Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. Conclusions: A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
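The fractional-shift step can be sketched in one dimension: a four-tap (third-order) Lagrange filter interpolates a sequence at a fractional offset d, exactly reproducing polynomials up to cubic. The sequence below is illustrative, not seed-kernel data:

```python
def lagrange3_shift(x, d):
    """Shift sequence x by a fractional amount d in (0, 1) using a
    third-order (4-tap) Lagrange interpolation filter."""
    # Lagrange basis evaluated at d for samples at offsets -1, 0, 1, 2
    h = [-d * (d - 1) * (d - 2) / 6,
         (d + 1) * (d - 1) * (d - 2) / 2,
         -(d + 1) * d * (d - 2) / 2,
         (d + 1) * d * (d - 1) / 6]
    out = []
    for n in range(1, len(x) - 2):
        # out value interpolates x at position n + d
        out.append(sum(h[k] * x[n - 1 + k] for k in range(4)))
    return out

# A cubic sequence is shifted exactly (filter order matches degree).
x = [float(n ** 3) for n in range(8)]
y = lagrange3_shift(x, 0.25)  # y[i] interpolates x at i + 1.25
```

In the paper's 3-D setting the integer part of each seed's displacement is handled in the Fourier domain and only the remaining sub-grid fraction goes through a filter of this kind.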
Shell Element Verification & Regression Problems for DYNA3D
Zywicz, E
2008-02-01
A series of quasi-static regression/verification problems was developed for the triangular and quadrilateral shell element formulations contained in Lawrence Livermore National Laboratory's explicit finite element program DYNA3D. Each regression problem imposes both displacement- and force-type boundary conditions to probe the five independent nodal degrees of freedom employed in the targeted formulation. When applicable, the finite element results are compared with small-strain linear-elastic closed-form reference solutions to verify select aspects of the formulation's implementation. Although all problems in the suite depict the same geometry, material behavior, and loading conditions, each problem represents a unique combination of shell formulation, stabilization method, and integration rule. Collectively, the thirty-six new regression problems in the test suite cover nine different shell formulations, three hourglass stabilization methods, and three families of through-thickness integration rules.
Geometric representation of fundamental particles' inertial mass
Schachter, L.; Spencer, James
2015-07-22
A geometric representation of the (N = 279) masses of quarks, leptons, hadrons and gauge bosons was introduced by employing a Riemann Sphere, facilitating the interpretation of the N masses in terms of a single particle, the Masson, which may be in any one of the N eigen-states. Geometrically, its mass is the radius of the Riemann Sphere. Dynamically, its derived mass is near the mass of the nucleon regardless of whether it is determined from all N particles or from only the hadrons, the mesons, or the baryons separately. Ignoring all the other properties of these particles, it is shown that the eigen-values, the polar representation θ_{ν} of the masses on the Sphere, satisfy the symmetry θ_{ν} + θ_{N+1-ν} = π within less than 1% relative error. In addition, these pair correlations include the pairs θ_{γ} + θ_{top} ≃ π and θ_{gluon} + θ_{H} ≃ π as well as pairing the weak gauge bosons with the three neutrinos.
DYNA3D/ParaDyn Regression Test Suite Inventory
Lin, J I
2011-01-25
The following table constitutes an initial assessment of feature coverage across the regression test suite used for DYNA3D and ParaDyn. It documents the regression test suite at the time of production release 10.1 in September 2010. The columns of the table represent groupings of functionalities, e.g., material models. Each problem in the test suite is represented by a row in the table. All features exercised by the problem are denoted by a check mark in the corresponding column. The definition of ''feature'' has not been subdivided to its smallest unit of user input, e.g., algorithmic parameters specific to a particular type of contact surface. This represents a judgment to provide code developers and users a reasonable impression of feature coverage without expanding the width of the table by several multiples. All regression testing is run in parallel, typically with eight processors. Many are strictly regression tests acting as a check that the codes continue to produce adequately repeatable results as development unfolds, compilers change, and platforms are replaced. A subset of the tests represents true verification problems that have been checked against analytical or other benchmark solutions. Users are welcome to submit documented problems for inclusion in the test suite, especially if they are heavily exercising, and dependent upon, features that are currently underrepresented.
Representation of Limited Rights Data and Restricted Computer Software |
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Department of Energy. Representation of Limited Rights Data and Restricted Computer Software (44.02 KB). More Documents & Publications: CLB-1003.PDF; Intellectual Property Provisions (CSB-1003); Cooperative Agreement Research, Development, or Demonstration Domestic Small Businesses; CDLB-1003.PDF
T-567: Linux Kernel Buffer Overflow in ldm_frag_add() May Let Local Users Gain Elevated Privileges
Broader source: Energy.gov [DOE]
A vulnerability was reported in the Linux Kernel. A local user may be able to obtain elevated privileges on the target system. A physically local user can connect a storage device with a specially crafted LDM partition table to trigger a buffer overflow in the ldm_frag_add() function in 'fs/partitions/ldm.c' and potentially execute arbitrary code with elevated privileges.
From Rays to Structures: Representation and Selection of Void...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
From Rays to Structures: Representation and Selection of Void Structures in Zeolites using Stochastic Methods Previous Next List Andrew J. Jones, Christopher Ostrouchov, Maciej...
Representation of Limited Rights Data and Restricted Computer...
...r... Representation of Limited Rights Data and Restricted Computer Software (a) Any data delivered under an award resulting from this announcement is ...
A kernel-oriented model for coalition-formation in general environments: Implementation and results
Shehory, O.; Kraus, S.
1996-12-31
In this paper we present a model for coalition formation and payoff distribution in general environments. We focus on a reduced complexity kernel-oriented coalition formation model, and provide a detailed algorithm for the activity of the single rational agent. The model is partitioned into a social level and a strategic level, to distinguish between regulations that must be agreed upon and are forced by agent-designers, and strategies by which each agent acts at will. In addition, we present an implementation of the model and simulation results. From these we conclude that implementing the model for coalition formation among agents increases the benefits of the agents with reasonable time consumption. It also shows that more coalition formations yield more benefits to the agents.
Three-dimensional photodissociation in strong laser fields: Memory-kernel effective-mode expansion
Li Xuan; Thanopulos, Ioannis; Shapiro, Moshe
2011-03-15
We introduce a method for the efficient computation of non-Markovian quantum dynamics for strong (and time-dependent) system-bath interactions. The past history of the system dynamics is incorporated by expanding the memory kernel in exponential functions, thereby transforming, in an exact fashion, the non-Markovian integrodifferential equations into a (larger) set of ''effective modes'' differential equations (EMDE). We have devised a method that easily diagonalizes the EMDE, thereby allowing for the efficient construction of an adiabatic basis and the fast propagation of the EMDE in time. We have applied this method to three-dimensional photodissociation of the H{sub 2}{sup +} molecule by strong laser fields. Our calculations properly include resonance-Raman scattering via the continuum, resulting in extensive rotational and vibrational excitations. The calculated final kinetic and angular distributions of the photofragments are in overall excellent agreement with experiments, both when transform-limited pulses and when chirped pulses are used.
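The core trick, trading a memory integral for extra differential equations, can be shown in one dimension. The sketch below assumes a kernel already expanded as K(t) = Σ_k c_k exp(-γ_k t) and integrates both the effective-mode system and the original integro-differential form with a simple Euler scheme; it is a toy illustration of the EMDE idea, not the authors' three-dimensional photodissociation code.

```python
import math

def propagate_effective_modes(c, g, x0=1.0, dt=0.01, T=2.0):
    """Integrate x'(t) = -integral_0^t K(t-s) x(s) ds with
    K(t) = sum_k c_k exp(-g_k t), via auxiliary ("effective") modes:
    z_k' = -g_k z_k + c_k x  and  x' = -sum_k z_k."""
    x, z = x0, [0.0] * len(c)
    for _ in range(int(T / dt)):
        dx = -sum(z)
        z = [zk + dt * (-gk * zk + ck * x) for zk, gk, ck in zip(z, g, c)]
        x += dt * dx
    return x

def propagate_direct(c, g, x0=1.0, dt=0.01, T=2.0):
    """Same dynamics, but evaluating the memory integral over the stored
    history at every step (O(n^2) work) for comparison."""
    xs = [x0]
    for i in range(int(T / dt)):
        t = i * dt
        conv = sum(xs[j] * dt * sum(ck * math.exp(-gk * (t - j * dt))
                                    for ck, gk in zip(c, g))
                   for j in range(i + 1))
        xs.append(xs[-1] - dt * conv)
    return xs[-1]
```

With a single exponential term the two propagators agree to discretization accuracy, while the effective-mode version does only O(n) work and never stores the history.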
A representation formula for maps on supermanifolds
Helein, Frederic [Institut de Mathematiques de Jussieu, UMR 7586, Universite Denis Diderot-Paris 7, Case 7012, 2 place Jussieu, 75251 Paris Cedex 5 (France)
2008-02-15
We analyze the notion of morphisms of rings of superfunctions, which is the basic concept underlying the definition of supermanifolds as ringed spaces (i.e., following Berezin, Leites, Manin, etc.). We establish a representation formula for all (pull-back) morphisms from the algebra of functions on an ordinary manifold to the superalgebra of functions on an open subset of a superspace. We then derive two consequences of this result. The first one is that we can integrate the data associated with a morphism in order to get a (nonunique) map defined on an ordinary space (and uniqueness can be achieved by restriction to a scheme). The second one is a simple and intuitive recipe to compute pull-back images of a function on a manifold M by a map from a superspace to M.
Latest Jurassic-early Cretaceous regressive facies, northeast Africa craton
van Houten, F.B.
1980-06-01
Nonmarine to paralic detrital deposits accumulated in six large basins between Algeria and the Arabo-Nubian shield during major regression in latest Jurassic and Early Cretaceous time. The Ghadames, Sirte (north-central Libya), and Northern (Egypt) basins lay along the cratonic margin of northeastern Africa. The Murzuk, Kufra, and Southern (Egypt) basins lay in the south within the craton. Data for reconstructing distribution, facies, and thickness of relevant sequences are adequate for the three northern basins only. High detrital influx near the end of Jurassic time and in mid-Cretaceous time produced regressive nubian facies composed largely of low-sinuosity stream and fan-delta deposits. In the west and southwest the Ghadames, Murzuk, and Kufra basins were filled with a few hundred meters of detritus after long-continued earlier Mesozoic aggradation. In northern Egypt the regressive sequence succeeded earlier Mesozoic marine sedimentation; in the Sirte and Southern basins correlative deposits accumulated on Precambrian and Variscan terranes after earlier Mesozoic uplift and erosion. Waning of detrital influx into southern Tunisia and adjacent Libya in the west and into Israel in the east initiated an Albian to early Cenomanian transgression of Tethys. By late Cenomanian time it had flooded the entire cratonic margin, and spread southward into the Murzuk and Southern basins, as well as onto the Arabo-Nubian shield. Latest Jurassic-earliest Cretaceous, mid-Cretaceous, and Late Cretaceous transgressions across northeastern Africa recorded in these sequences may reflect worldwide eustatic sea-level rises. In contrast, renewed large supply of detritus during each regression and a comparable subsidence history of intracratonic and marginal basins imply regional tectonic control. 6 figures.
Collins, J.L.
2004-12-02
The main objective of the Depleted UO{sub 2} Kernels Production Task at Oak Ridge National Laboratory (ORNL) was to conduct two small-scale production campaigns to produce 2 kg of UO{sub 2} kernels with diameters of 500 {+-} 20 {micro}m and 3.5 kg of UO{sub 2} kernels with diameters of 350 {+-} 10 {micro}m for the U.S. Department of Energy Advanced Fuel Cycle Initiative Program. The final acceptance requirements for the UO{sub 2} kernels are provided in the first section of this report. The kernels were prepared for use by the ORNL Metals and Ceramics Division in a development study to perfect the triisotropic (TRISO) coating process. It was important that the kernels be strong and near theoretical density, with excellent sphericity, minimal surface roughness, and no cracking. This report gives a detailed description of the production efforts and results as well as an in-depth description of the internal gelation process and its chemistry. It describes the laboratory-scale gel-forming apparatus, optimum broth formulation and operating conditions, preparation of the acid-deficient uranyl nitrate stock solution, the system used to provide uniform broth droplet formation and control, and the process of calcining and sintering UO{sub 3} {center_dot} 2H{sub 2}O microspheres to form dense UO{sub 2} kernels. The report also describes improvements and best past practices for uranium kernel formation via the internal gelation process, which utilizes hexamethylenetetramine and urea. Improvements were made in broth formulation and broth droplet formation and control that made it possible in many of the runs in the campaign to produce the desired 350 {+-} 10-{micro}m-diameter kernels, and to obtain very high yields.
On the representation of many-body interactions in water
Medders, Gregory R.; Gotz, Andreas W.; Morales, Miguel A.; Bajaj, Pushp; Paesani, Francesco
2015-09-09
Our recent work has shown that the many-body expansion of the interaction energy can be used to develop analytical representations of global potential energy surfaces (PESs) for water. In this study, the role of short- and long-range interactions at different orders is investigated by analyzing water potentials that treat the leading terms of the many-body expansion through implicit (i.e., TTM3-F and TTM4-F PESs) and explicit (i.e., WHBB and MB-pol PESs) representations. Moreover, it is found that explicit short-range representations of 2-body and 3-body interactions along with a physically correct incorporation of short- and long-range contributions are necessary for an accurate representation of the water interactions from the gas to the condensed phase. Likewise, a complete many-body representation of the dipole moment surface is found to be crucial to reproducing the correct intensities of the infrared spectrum of liquid water.
Graph representation of protein free energy landscape
Li, Minghai; Duan, Mojie; Fan, Jue; Huo, Shuanghong; Han, Li
2013-11-14
The thermodynamics and kinetics of protein folding and protein conformational changes are governed by the underlying free energy landscape. However, the multidimensional nature of the free energy landscape makes it difficult to describe. We propose to use a weighted-graph approach to depict the free energy landscape, with the nodes on the graph representing the conformational states and the edge weights reflecting the free energy barriers between the states. Our graph is constructed from a molecular dynamics trajectory and does not involve projecting the multi-dimensional free energy landscape onto a low-dimensional space defined by a few order parameters. The calculation of free energy barriers was based on transition-path theory using the MSMBuilder2 package. We compare our graph with the widely used transition disconnectivity graph (TRDG), which is constructed from the same trajectory, and show that our approach gives a more accurate description of the free energy landscape than the TRDG approach even though the latter can be organized into a simple tree representation. The weighted-graph approach is general and can be used on any complex system.
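As a toy version of this construction, the sketch below builds a weighted graph from a discrete state trajectory: nodes carry free energies F_i = -kT ln p_i and edge weights penalise rare transitions. Estimating "barriers" from raw transition counts is our simplification; the paper computes them with transition-path theory via MSMBuilder2.

```python
import math
from collections import Counter

def free_energy_graph(traj, kT=1.0):
    """Weighted graph from a discrete state trajectory: node weight is the
    state free energy -kT ln p_i; edge weight -kT ln p(i -> j) grows as the
    transition becomes rarer (a crude stand-in for a barrier height)."""
    pop = Counter(traj)
    trans = Counter(zip(traj, traj[1:]))          # successive-frame pairs
    nodes = {s: -kT * math.log(pop[s] / len(traj)) for s in pop}
    edges = {(a, b): -kT * math.log(cnt / pop[a])
             for (a, b), cnt in trans.items() if a != b}
    return nodes, edges

# Two states: 'A' is visited twice as often as 'B'.
nodes, edges = free_energy_graph(list("AAAABBAAAABB"))
```

On this trajectory the more-populated state A gets the lower free energy, and the A-to-B edge weight reflects that only 2 of the 8 A-frames hop to B.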
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Zhang, Y.; Easter, R. C.; Ghan, S. J.; Abdul-Razzak, H.
2002-11-07
We use a 1-D version of a climate-aerosol-chemistry model with both modal and sectional aerosol size representations to evaluate the impact of aerosol size representation on modeling aerosol-cloud interactions in shallow stratiform clouds observed during the 2nd Aerosol Characterization Experiment. Both the modal (with prognostic aerosol number and mass, or prognostic aerosol number, surface area, and mass, referred to as the Modal-NM and Modal-NSM) and the sectional approaches (with 12 and 36 sections) predict total number and mass for interstitial and activated particles that are generally within several percent of references from a high resolution 108-section approach. The modal approach with prognostic aerosol mass but diagnostic number (referred to as the Modal-M) cannot accurately predict the total particle number and surface areas, with deviations from the references ranging from 7-161%. The particle size distributions are sensitive to size representations, with normalized absolute differences of up to 12% and 37% for the 36- and 12-section approaches, and 30%, 39%, and 179% for the Modal-NSM, Modal-NM, and Modal-M, respectively. For the Modal-NSM and Modal-NM, differences from the references are primarily due to the inherent assumptions and limitations of the modal approach. In particular, they cannot resolve the abrupt size transition between the interstitial and activated aerosol fractions. For the 12- and 36-section approaches, differences are largely due to limitations of the parameterized activation for non-log-normal size distributions, plus the coarse resolution for the 12-section case. Differences are larger both with higher aerosol (i.e., less complete activation) and higher SO2 concentrations (i.e., greater modification of the initial aerosol distribution).
Sinc function representation and three-loop master diagrams
Easther, Richard; Guralnik, Gerald; Hahn, Stephen
2001-04-15
We test the Sinc function representation, a novel method for numerically evaluating Feynman diagrams, by using it to evaluate the three-loop master diagrams. Analytical results have been obtained for all these diagrams, and we find excellent agreement between our calculations and the exact values. The Sinc function representation converges rapidly, and it is straightforward to obtain accuracies of 1 part in 10{sup 6} for these diagrams and with longer runs we found results better than 1 part in 10{sup 12}. Finally, this paper extends the Sinc function representation to diagrams containing massless propagators.
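The representation's rapid convergence stems from a classical fact: for integrands that are analytic and decay quickly, the sinc (trapezoidal) rule h·Σ f(kh) converges exponentially in 1/h. A minimal illustration on a Gaussian integral (our example, not one of the master diagrams):

```python
import math

def sinc_quadrature(f, h=0.5, K=30):
    """Approximate the integral of f over the whole real line by the sinc
    (trapezoidal) rule h * sum_{k=-K..K} f(k*h); exponentially accurate for
    analytic, rapidly decaying integrands."""
    return h * sum(f(k * h) for k in range(-K, K + 1))

# Gaussian integral: the exact value is sqrt(pi).
approx = sinc_quadrature(lambda x: math.exp(-x * x))
```

Even with this coarse step the Gaussian integral sqrt(pi) is reproduced to near machine precision, which is the same mechanism that makes the Sinc representation of diagram integrals converge so quickly.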
CHARACTERISTIC SIZE OF FLARE KERNELS IN THE VISIBLE AND NEAR-INFRARED CONTINUA
Xu, Yan; Jing, Ju; Wang, Haimin; Cao, Wenda
2012-05-01
In this Letter, we present a new approach to estimate the formation height of visible and near-infrared emission of an X10 flare. The sizes of flare emission cores in three wavelengths are accurately measured during the peak of the flare. The source size is the largest in the G band at 4308 A and shrinks toward longer wavelengths, namely the green continuum at 5200 A and NIR at 15600 A, where the emission is believed to originate from the deeper atmosphere. This size-wavelength variation is likely explained by the direct heating model as electrons need to move along converging field lines from the corona to the photosphere. Therefore, one can observe the smallest source, which in our case is 0.''65 {+-} 0.''02 in the bottom layer (represented by NIR), and observe relatively larger kernels in upper layers of 1.''03 {+-} 0.''14 and 1.''96 {+-} 0.''27, using the green continuum and G band, respectively. We then compare the source sizes with a simple magnetic geometry to derive the formation height of the white-light sources and magnetic pressure in different layers inside the flare loop.
Regression analysis study on the carbon dioxide capture process
Zhou, Q.; Chan, C.W.; Tontiwachiwuthikul, P.
2008-07-15
Research on amine-based carbon dioxide (CO{sub 2}) capture has mainly focused on improving the effectiveness and efficiency of the CO{sub 2} capture process. The objective of our work is to explore relationships among key parameters that affect the CO{sub 2} production rate. From a survey of relevant literature, we observed that the significant parameters influencing the CO{sub 2} production rate include the reboiler heat duty, solvent concentration, solvent circulation rate, and CO{sub 2} lean loading. While it is widely recognized that these parameters are related, the exact nature of the relationships is unknown. This paper presents a regression study conducted with data collected at the International Test Center for CO{sub 2} capture (ITC) located at the University of Regina, Saskatchewan, Canada. The regression technique was applied to a data set consisting of data on 113 days of operation of the CO{sub 2} capture plant, and four mathematical models of the key parameters have been developed. The models can be used for predicting the performance of the plant when changes occur in the process. By manipulating the parameter values, the efficiency of the CO{sub 2} capture process can be improved.
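A multiple linear regression of the sort described can be sketched in a few lines. The example below fits coefficients by solving the normal equations with Gaussian elimination; the variable names and planted data are ours for illustration, and the ITC plant data and the four fitted models are of course not reproduced here.

```python
def fit_linear(X, y):
    """Ordinary least squares, y ~ b0 + b1*x1 + ..., by solving the normal
    equations (X'X) b = X'y with Gaussian elimination (partial pivoting)."""
    rows = [[1.0] + list(r) for r in X]            # prepend intercept column
    m = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(m)]
    for i in range(m):
        p = max(range(i, m), key=lambda k: abs(A[k][i]))   # pivot row
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for k in range(i + 1, m):
            f = A[k][i] / A[i][i]
            for j in range(i, m):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    coef = [0.0] * m
    for i in range(m - 1, -1, -1):                 # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, m))) / A[i][i]
    return coef

# Planted model y = 2 + 3*x1 - x2 (stand-ins for, e.g., heat duty and lean loading).
X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
y = [2 + 3 * a - b for a, b in X]
coefs = fit_linear(X, y)   # recovers [2.0, 3.0, -1.0]
```

`fit_linear` returns `[intercept, b1, b2, ...]`; on noise-free data it recovers the planted relationship exactly, and on plant data it would return the least-squares fit.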
Impact of aerosol size representation on modeling aerosol-cloud...
Office of Scientific and Technical Information (OSTI)
SciTech Connect Search Results Journal Article: Impact of aerosol size representation on ... OSTI Identifier: 15003527 Report Number(s): PNWD-SA--5600 Journal ID: ISSN 0148-0227 ...
Simple Model Representations of Transport in a Complex Fracture...
Office of Scientific and Technical Information (OSTI)
Effects on Long-Term Predictions Citation Details In-Document Search Title: Simple Model Representations of Transport in a Complex Fracture and Their Effects on Long-Term ...
Representation of Limited Rights Data and Restricted Computer Software |
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Department of Energy Representation of Limited Rights Data and Restricted Computer Software Representation of Limited Rights Data and Restricted Computer Software Any data delivered under an award resulting from this announcement is subject to the Rights in Data - General or the Rights in Data - Programs Covered Under Special Data Statutes clause (See Intellectual Property Provisions). Under these clauses, the Recipient may withhold from delivery data that qualify as limited rights data or
Highest-weight representations of Borcherds algebras
Slansky, R.
1997-01-01
General features of highest-weight representations of Borcherds algebras are described. To show their typical features, several representations of Borcherds extensions of finite-dimensional algebras are analyzed. Then the example of the extension of affine su(2) to a Borcherds algebra is examined. These algebras provide a natural way to extend a Kac-Moody algebra to include the Hamiltonian and number-changing operators in a generalized symmetry structure.
A Visual Analytics Approach for Correlation, Classification, and Regression Analysis
Steed, Chad A; SwanII, J. Edward; Fitzpatrick, Patrick J.; Jankun-Kelly, T.J.
2012-02-01
New approaches that combine the strengths of humans and machines are necessary to equip analysts with the proper tools for exploring today's increasingly complex, multivariate data sets. In this paper, a novel visual data mining framework, called the Multidimensional Data eXplorer (MDX), is described that addresses the challenges of today's data by combining automated statistical analytics with a highly interactive parallel-coordinates-based canvas. In addition to several intuitive interaction capabilities, this framework offers a rich set of graphical statistical indicators, interactive regression analysis, visual correlation mining, automated axis arrangements and filtering, and data classification techniques. The current work provides a detailed description of the system as well as a discussion of key design aspects and critical feedback from domain experts.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gonis, A.; Zhang, X. G.; Stocks, G. M.; Nicholson, D. M.
2015-10-23
Density functional theory for the case of general, N-representable densities is reformulated in terms of density functional derivatives of expectation values of operators evaluated with wave functions leading to a density, making no reference to the concept of potential. The developments provide a complete solution of the v-representability problem by establishing a mathematical procedure that determines whether a density is v-representable and in the case of an affirmative answer determines the potential (within an additive constant) as a derivative with respect to the density of a constrained search functional. It also establishes the existence of an energy functional of the density that, for v-representable densities, assumes its minimum value at the density describing the ground state of an interacting many-particle system. The theorems of Hohenberg and Kohn emerge as special cases of the formalism.
Contescu, Cristian I
2006-01-01
This report supports the effort for development of small scale fabrication of UCO (a mixture of UO{sub 2} and UC{sub 2}) fuel kernels for the generation IV high temperature gas reactor program. In particular, it is focused on optimization of dispersion conditions of carbon black in the broths from which carbon-containing (UO{sub 2} {center_dot} H{sub 2}O + C) gel spheres are prepared by internal gelation. The broth results from mixing a hexamethylenetetramine (HMTA) and urea solution with an acid-deficient uranyl nitrate (ADUN) solution. Carbon black, which is previously added to one or other of the components, must stay dispersed during gelation. The report provides a detailed description of characterization efforts and results, aimed at identification and testing carbon black and surfactant combinations that would produce stable dispersions, with carbon particle sizes below 1 {micro}m, in aqueous HMTA/urea and ADUN solutions. A battery of characterization methods was used to identify the properties affecting the water dispersability of carbon blacks, such as surface area, aggregate morphology, volatile content, and, most importantly, surface chemistry. The report introduces the basic principles for each physical or chemical method of carbon black characterization, lists the results obtained, and underlines cross-correlations between methods. Particular attention is given to a newly developed method for characterization of surface chemical groups on carbons in terms of their acid-base properties (pK{sub a} spectra) based on potentiometric titration. Fourier-transform infrared (FTIR) spectroscopy was used to confirm the identity of surfactants, both ionic and non-ionic. In addition, background information on carbon black properties and the mechanism by which surfactants disperse carbon black in water is also provided. A list of main physical and chemical properties characterized, samples analyzed, and results obtained, as well as information on the desired trend or
OPTICAL SPECTRAL OBSERVATIONS OF A FLICKERING WHITE-LIGHT KERNEL IN A C1 SOLAR FLARE
Kowalski, Adam F.; Cauzzi, Gianna; Fletcher, Lyndsay
2015-01-10
We analyze optical spectra of a two-ribbon, long-duration C1.1 flare that occurred on 2011 August 18 within AR 11271 (SOL2011-08-18T15:15). The impulsive phase of the flare was observed with a comprehensive set of space-borne and ground-based instruments, which provide a range of unique diagnostics of the lower flaring atmosphere. Here we report the detection of enhanced continuum emission, observed in low-resolution spectra from 3600 to 4550 Å acquired with the Horizontal Spectrograph at the Dunn Solar Telescope. A small, ∼0.''5 (10{sup 15} cm{sup 2}) penumbral/umbral kernel brightens repeatedly in the optical continuum and chromospheric emission lines, similar to the temporal characteristics of the hard X-ray variation as detected by the Gamma-ray Burst Monitor on the Fermi spacecraft. Radiative-hydrodynamic flare models that employ a nonthermal electron beam energy flux high enough to produce the optical contrast in our flare spectra would predict a large Balmer jump in emission, indicative of hydrogen recombination radiation from the upper flare chromosphere. However, we find no evidence of such a Balmer jump in the bluemost spectral region of the continuum excess. Just redward of the expected Balmer jump, we find evidence of a ''blue continuum bump'' in the excess emission which may be indicative of the merging of the higher order Balmer lines. The large number of observational constraints provides a springboard for modeling the blue/optical emission for this particular flare with radiative-hydrodynamic codes, which are necessary to understand the opacity effects for the continuum and emission line radiation at these wavelengths.
Support Vector Machine algorithm for regression and classification
Energy Science and Technology Software Center (OSTI)
2001-08-01
The software is an implementation of the Support Vector Machine (SVM) algorithm that was invented and developed by Vladimir Vapnik and his co-workers at AT&T Bell Laboratories. The specific implementation reported here is an Active Set method for solving a quadratic optimization problem that forms the major part of any SVM program. The implementation is tuned to specific constraints generated in the SVM learning. Thus, it is more efficient than general-purpose quadratic optimization programs. A decomposition method has been implemented in the software that enables processing large data sets. The size of the learning data is virtually unlimited by the capacity of the computer physical memory. The software is flexible and extensible. Two upper bounds are implemented to regulate the SVM learning for classification, which allow users to adjust the false positive and false negative rates. The software can be used either as a standalone, general-purpose SVM regression or classification program, or be embedded into a larger software system.
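For a flavor of the classifier such a package learns, here is a minimal linear SVM trained by stochastic sub-gradient descent on the hinge loss (Pegasos-style). This is deliberately not the Active Set dual solver the record describes, only a primal sketch of the same model family; the function name, hyperparameters, and toy data are ours.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Linear SVM via stochastic sub-gradient descent on the hinge loss
    (Pegasos-style). Labels y must be +1/-1. Returns weights w and bias b;
    prediction is the sign of dot(w, x) + b."""
    rng = random.Random(seed)
    w, b, t = [0.0] * len(X[0]), 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):   # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)                     # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [wj * (1 - eta * lam) for wj in w]    # regularization shrink
            if margin < 1:                            # hinge sub-gradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

# Toy linearly separable data.
X = [(-2, -1), (-1, -2), (-2, -2), (1, 2), (2, 1), (2, 2)]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(X, y)
```

The two regularization bounds mentioned in the record correspond, in this primal view, to weighting the hinge penalty differently for the positive and negative classes.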
Patrick, Christopher E.; Thygesen, Kristian S.
2015-09-14
We present calculations of the correlation energies of crystalline solids and isolated systems within the adiabatic-connection fluctuation-dissipation formulation of density-functional theory. We perform a quantitative comparison of a set of model exchange-correlation kernels originally derived for the homogeneous electron gas (HEG), including the recently introduced renormalized adiabatic local-density approximation (rALDA) and also kernels which (a) satisfy known exact limits of the HEG, (b) carry a frequency dependence, or (c) display a 1/k{sup 2} divergence for small wavevectors. After generalizing the kernels to inhomogeneous systems through a reciprocal-space averaging procedure, we calculate the lattice constants and bulk moduli of a test set of 10 solids consisting of tetrahedrally bonded semiconductors (C, Si, SiC), ionic compounds (MgO, LiCl, LiF), and metals (Al, Na, Cu, Pd). We also consider the atomization energy of the H{sub 2} molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA’s tendency to overestimate the magnitude of the correlation energy whilst maintaining a high-accuracy description of structural properties.
Bansal, R.M.; Kothari, L.S.; Tewari, S.P.
1980-10-01
A new scattering kernel for heavy water has been proposed. The kernel takes into account chemical binding energy effects and also includes the rotational and intramolecular vibrational modes. Using this scattering kernel, various neutron transport processes in the temperature range 5 to 60°C have been studied and compared with the corresponding experimental results. The calculated results include the total neutron scattering cross section at 20°C; the asymptotic decay of neutron pulses in the temperature range 5 to 60°C and the temperature variation of the diffusion coefficient and diffusion cooling coefficient; time-dependent spectra inside finite-sized assemblies of heavy water at 20 and 43.3°C; thermalization time; and diffusion length and space-dependent studies in pure and poisoned assemblies of heavy water. The calculated results are in good agreement with the experimental results. At some places notable differences are observed between the results obtained using our scattering kernel and those based on the Honeck kernel.
Regression analysis of technical parameters affecting nuclear power plant performances
Ghazy, R.; Ricotti, M. E.; Trucco, P.
2012-07-01
Since the 1980s many studies have sought to explain the good and bad performances of commercial nuclear power plants (NPPs), yet no definite correlation has been found that is fully representative of plant operational experience. In early works, data availability and the number of operating power stations were both limited; therefore, results suggested that specific technical characteristics of NPPs were the main causal factors for successful plant operation. Although these aspects still play a significant role, later studies and observations showed that other factors concerning management and organization of the plant could instead be predominant when comparing utilities' operational and economic results. Utility quality, in a word, can be used to summarize all the managerial and operational aspects that seem to be effective in determining plant performance. In this paper operational data of a consistent sample of commercial nuclear power stations, out of the 433 operating NPPs, are analyzed, focusing mainly on the last decade of operational experience. The sample consists of PWR and BWR units operated by utilities located in different countries, including the U.S., Japan, France, Germany, and Finland. Multivariate regression is performed using the Unit Capability Factor (UCF) as the dependent variable; this factor reflects the effectiveness of plant programs and practices in maximizing available electrical generation and consequently provides an overall indication of how well plants are operated and maintained. Aspects that may not be true causal factors but can have a consistent impact on the UCF, such as technology design, supplier, size, and age, are included in the analysis as independent variables. (authors)
On the representation of many-body interactions in water
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Medders, Gregory R.; Gotz, Andreas W.; Morales, Miguel A.; Bajaj, Pushp; Paesani, Francesco
2015-09-09
Our recent work has shown that the many-body expansion of the interaction energy can be used to develop analytical representations of global potential energy surfaces (PESs) for water. In this study, the role of short- and long-range interactions at different orders is investigated by analyzing water potentials that treat the leading terms of the many-body expansion through implicit (i.e., TTM3-F and TTM4-F PESs) and explicit (i.e., WHBB and MB-pol PESs) representations. Moreover, it is found that explicit short-range representations of 2-body and 3-body interactions, along with a physically correct incorporation of short- and long-range contributions, are necessary for an accurate representation of the water interactions from the gas to the condensed phase. Likewise, a complete many-body representation of the dipole moment surface is found to be crucial to reproducing the correct intensities of the infrared spectrum of liquid water.
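The many-body expansion underlying these PESs can be sketched in a few lines. The fragment energy function below is a hypothetical pairwise toy (a stand-in for the ab initio energies used by WHBB and MB-pol); the inclusion-exclusion recursion defining the n-body increments is the general idea:

```python
from itertools import combinations

def many_body_terms(fragments, energy, max_order=3):
    """Decompose the total energy of `fragments` into 1-, 2-, ...,
    max_order-body contributions via inclusion-exclusion."""
    # cache energies of every sub-cluster up to max_order
    E = {}
    for n in range(1, max_order + 1):
        for combo in combinations(range(len(fragments)), n):
            E[combo] = energy([fragments[i] for i in combo])
    # n-body increment: delta(S) = E(S) - sum of deltas of proper subsets
    delta = {}
    for combo in sorted(E, key=len):
        sub = sum(delta[s] for n in range(1, len(combo))
                  for s in combinations(combo, n))
        delta[combo] = E[combo] - sub
    # total contribution at each order
    return {n: sum(v for k, v in delta.items() if len(k) == n)
            for n in range(1, max_order + 1)}

# toy purely pairwise "energy": sum of pair products (invented for illustration)
def toy_energy(frags):
    return sum(a * b for a, b in combinations(frags, 2))

terms = many_body_terms([1.0, 2.0, 3.0], toy_energy)
```

For a purely pairwise toy potential the 3-body increment vanishes by construction; for real water potentials the 2- and 3-body terms dominate, which is what motivates treating them explicitly.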
Category of trees in representation theory of quantum algebras
Moskaliuk, N. M.; Moskaliuk, S. S.
2013-10-15
New applications of categorical methods are connected with new additional structures on categories. One such structure in the representation theory of quantum algebras, the category of Kuznetsov-Smorodinsky-Vilenkin-Smirnov (KSVS) trees, is constructed; its objects are finite rooted KSVS trees, with morphisms generated by the transition from one KSVS tree to another.
Discrete physics: Practice, representation and rules of correspondence
Noyes, H.P.
1988-07-01
We make a brief historical review of some aspects of modern physics which we find most significant in our own endeavor. We discuss the ''Yukawa Vertices'' of elementary particle theory as used in laboratory practice, second quantized field theory, analytic S-Matrix theory and in our own approach. We review the conserved quantum numbers in the Standard Model of quarks and leptons. This concludes our presentation of the ''E-frame.'' We try to develop a self-consistent representation of our theory. We have already claimed that this approach provides a discrete reconciliation between the formal (representational) aspects of quantum mechanics and relativity. Also discussed are rules of correspondence connecting the formalism to the practice of physics by using the counter paradigm and event-based coordinates to construct relativistic quantum mechanics in a new way. 31 refs., 12 figs., 1 tab.
Broader source: Energy.gov [DOE]
The Environmental Management Site-Specific Advisory Board recommends that DOE develop graphic representations of waste disposition paths.
The Institute for Public Representation, on behalf of the Potomac
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Riverkeeper, Inc., the Patuxent Riverkeeper, and the Anacostia Riverkeeper at Earth Conservation Corps Comments on Department of Energy's Special Environmental Analysis Regarding Operatio | Department of Energy
Representation of Limited Rights Data and Restricted Computer Software
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
REPRESENTATION OF LIMITED RIGHTS DATA AND RESTRICTED COMPUTER SOFTWARE Applicant: Funding Opportunity Announcement/Solicitation No.: (a) Any data delivered under an award resulting from this announcement is subject to the Rights in Data - General or the Rights in Data - Programs Covered Under Special Data Statutes clause (See Intellectual Property Provisions). Under these clauses, the Recipient may withhold from delivery data that qualify as limited rights data or restricted computer software.
Representation of analysis results involving aleatory and epistemic uncertainty.
Johnson, Jay Dean; Helton, Jon Craig; Oberkampf, William Louis; Sallaberry, Cedric J.
2008-08-01
Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behavior of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary cumulative distribution functions (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (i.e., interval analysis, possibility theory, evidence theory, probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterizations of epistemic uncertainty.
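The double-loop sampling that produces such a family of CDFs can be illustrated with a minimal sketch. The exponential response model and the [1, 3] epistemic interval are invented for illustration; real analyses use the problem's own models and uncertainty characterizations:

```python
import random

def cdf_family(n_epistemic=5, n_aleatory=1000, seed=0):
    """Outer loop: epistemic samples of a poorly known parameter (here the
    mean of an exponential response model, assumed fixed but unknown in
    [1, 3]). Inner loop: aleatory sampling of the response. Each epistemic
    sample yields one empirical CDF; the collection is the CDF family."""
    rng = random.Random(seed)
    family = []
    for _ in range(n_epistemic):
        mean = rng.uniform(1.0, 3.0)               # epistemic uncertainty
        draws = sorted(rng.expovariate(1.0 / mean)  # aleatory randomness
                       for _ in range(n_aleatory))
        family.append(draws)        # sorted draws define one empirical CDF
    return family

family = cdf_family()
# the spread across members at a fixed quantile displays epistemic uncertainty
medians = [curves[len(curves) // 2] for curves in family]
```

With a probabilistic characterization of the epistemic variable, summary curves (e.g. pointwise mean or percentile CDFs) can be drawn over this family, which is one of the graphical formats the abstract refers to.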
EPACT Representation for Covered Awards Over $100,000 | Department of
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Energy EPACT Representation for Covered Awards Over $100,000 EPACT Representation for Covered Awards Over $100,000 EPACT Representation (65.06 KB) More Documents & Publications 2007 Annual Plan 2007 Annual Plan for the Ultra-Deepwater and Unconventional Natural Gas and Other Petroleum Resources Research and Development Program 2008 Annual Plan
Bag of Lines (BoL) for Improved Aerial Scene Representation
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Sridharan, Harini; Cheriyadat, Anil M.
2014-09-22
Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
Regression Models for Demand Reduction based on Cluster Analysis of Load Profiles
Yamaguchi, Nobuyuki; Han, Junqiao; Ghatikar, Girish; Piette, Mary Ann; Asano, Hiroshi; Kiliccote, Sila
2009-06-28
This paper provides new regression models for the demand reduction of Demand Response programs, for the purposes of ex ante evaluation of the programs and of screening customers for enrollment. The proposed regression models employ load sensitivity to outside air temperature and a representative load pattern derived from cluster analysis of the customer baseline load as explanatory variables. The performance of the proposed models is examined from the viewpoint of the validity of the explanatory variables and goodness of fit, using actual load profile data of Pacific Gas and Electric Company's commercial and industrial customers who participated in the 2008 Critical Peak Pricing program, including Manual and Automated Demand Response.
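A minimal sketch of such a regression on synthetic data (the coefficients, predictors, and noise level are invented; the actual study fits measured load profiles) shows how load sensitivity to outside air temperature enters as an explanatory variable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
temp = rng.uniform(15, 35, n)        # outside air temperature (assumed deg C)
pattern = rng.uniform(0.5, 1.5, n)   # representative load-pattern index
# synthetic demand reduction: temperature-sensitive + pattern-driven + noise
reduction = 0.8 * temp + 5.0 * pattern + rng.normal(0.0, 1.0, n)

# OLS with intercept: reduction ~ b0 + b1*temp + b2*pattern
X = np.column_stack([np.ones(n), temp, pattern])
beta, *_ = np.linalg.lstsq(X, reduction, rcond=None)
# beta[1] estimates the load sensitivity to outside air temperature
```

In an ex ante setting, the fitted coefficients would then be used to predict the demand reduction of candidate customers from their cluster-derived load pattern and forecast temperature.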
Review of structure representation and reconstruction on mesoscale and microscale
Li, Dongsheng
2014-05-01
Structure representation and reconstruction on mesoscale and microscale is critical in material design, advanced manufacturing and multiscale modeling. Microstructure reconstruction has been applied in different areas of materials science and technology, structural materials, energy materials, geology, hydrology, etc. This review summarizes the microstructure descriptors and formulations used to represent and algorithms to reconstruct structures at microscale and mesoscale. In the stochastic methods using correlation function, different optimization approaches have been adapted for objective function minimization. A variety of reconstruction approaches are compared in efficiency and accuracy.
Direct Angular Representation Monte Carlo Code for Criticality Safety Analysis
Energy Science and Technology Software Center (OSTI)
1988-01-01
Version 00 MKENO-DAR calculates the effective neutron multiplication factor and neutron flux distribution in three-dimensional media, solving the multigroup neutron transport equation with a precise angular distribution function for neutron scattering. MKENO-DAR was developed from CCC-492/MULTI-KENO, which was developed from KENO-IV. MULTI-KENO divides the system into many subsystem SUPER BOXES, where the size of the BOX TYPEs in each SUPER BOX can be selected independently. MKENO-DAR improves the representation of the scattering angle over that in MULTI-KENO.
Modeling Personalized Email Prioritization: Classification-based and Regression-based Approaches
Yoo S.; Yang, Y.; Carbonell, J.
2011-10-24
Email overload, even after spam filtering, presents a serious productivity challenge for busy professionals and executives. One solution is automated prioritization of incoming emails to ensure the most important are read and processed quickly, while others are processed later as/if time permits in declining priority levels. This paper presents a study of machine learning approaches to email prioritization into discrete levels, comparing ordinal regression versus classifier cascades. Given the ordinal nature of discrete email priority levels, SVM ordinal regression would be expected to perform well, but surprisingly a cascade of SVM classifiers significantly outperforms ordinal regression for email prioritization. In contrast, SVM regression performs well -- better than classifiers -- on selected UCI data sets. This unexpected performance inversion is analyzed and results are presented, providing core functionality for email prioritization systems.
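The cascade reduction the paper evaluates can be sketched generically: one binary "priority greater than k" model per threshold, with the predicted level given by how many thresholds are exceeded. The decision-stump learner below is a toy stand-in for the SVMs used in the study, and the data are invented:

```python
def train_cascade(X, y, levels, fit_binary):
    """Ordinal prediction via a cascade of binary classifiers: one
    'is priority > k' model per threshold k (levels must be consecutive
    integers in ascending order)."""
    models = []
    for k in levels[:-1]:
        labels = [1 if yi > k else 0 for yi in y]
        models.append(fit_binary(X, labels))
    def predict(x):
        # each satisfied 'greater-than' test bumps the level by one
        return levels[0] + sum(m(x) for m in models)
    return predict

# toy 1-D stump learner: pick the split minimizing training error
def fit_stump(X, labels):
    best = min(((sum((xi > t) != bool(l) for xi, l in zip(X, labels)), t)
                for t in X), key=lambda p: p[0])[1]
    return lambda x: int(x > best)

X = [0.1, 0.4, 1.2, 1.8, 2.5, 2.9]
y = [1, 1, 2, 2, 3, 3]
predict = train_cascade(X, y, levels=[1, 2, 3], fit_binary=fit_stump)
```

Note that the cascade makes no monotonicity assumption across its binary models, which is one candidate explanation for its flexibility relative to a single ordinal-regression model.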
Deng, Yangyang; Parajuli, Prem B.
2011-08-10
Evaluation of the economic feasibility of a bio-gasification facility requires an understanding of its unit cost under different production capacities. The objective of this study was to evaluate the unit cost of syngas production at capacities from 60 through 1800 Nm{sup 3}/h using an economic model with three regression analysis techniques (simple regression, reciprocal regression, and log-log regression). The preliminary result of this study showed that the reciprocal regression technique gave the best-fit curve between unit cost and production capacity, with a sum of error squares (SES) lower than 0.001 and a coefficient of determination (R{sup 2}) of 0.996. The regression analysis determined the minimum unit cost of syngas production for micro-scale bio-gasification facilities to be $0.052/Nm{sup 3}, at a capacity of 2,880 Nm{sup 3}/h. The results of this study suggest that to reduce cost, facilities should run at a high production capacity. In addition, the contribution of this technique could be a new categorical criterion for evaluating micro-scale bio-gasification facilities from the perspective of economic analysis.
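Reciprocal regression, the best-fitting of the three techniques, models cost as linear in 1/capacity and can be fit by ordinary least squares. The data points below are synthetic stand-ins for the study's cost estimates:

```python
import numpy as np

# hypothetical (capacity, unit-cost) points following a reciprocal trend
capacity = np.array([60., 120., 300., 600., 1200., 1800.])   # Nm^3/h
unit_cost = 0.05 + 12.0 / capacity                           # $/Nm^3 (synthetic)

# reciprocal regression: cost = a + b / capacity, linear in 1/capacity
X = np.column_stack([np.ones_like(capacity), 1.0 / capacity])
(a, b), *_ = np.linalg.lstsq(X, unit_cost, rcond=None)

# goodness of fit (coefficient of determination)
pred = X @ np.array([a, b])
ss_res = np.sum((unit_cost - pred) ** 2)
ss_tot = np.sum((unit_cost - unit_cost.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

The intercept a is the asymptotic unit cost at large capacity, which is why the fitted curve flattens out and favors running facilities at high production capacity.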
Improving the representation of hydrologic processes in Earth System Models
Clark, Martyn P.; Fan, Ying; Lawrence, David M.; Adam, J. C.; Bolster, Diogo; Gochis, David; Hooper, Richard P.; Kumar, Mukesh; Leung, Lai-Yung R.; Mackay, D. Scott; Maxwell, Reed M.; Shen, Chaopeng; Swenson, Sean C.; Zeng, Xubin
2015-08-21
Many of the scientific and societal challenges in understanding and preparing for global environmental change rest upon our ability to understand and predict the water cycle change at large river basin, continent, and global scales. However, current large-scale models, such as the land components of Earth System Models (ESMs), do not yet represent the terrestrial water cycle in a fully integrated manner or resolve the finer-scale processes that can dominate large-scale water budgets. This paper reviews the current representation of hydrologic processes in ESMs and identifies the key opportunities for improvement. This review suggests that (1) the development of ESMs has not kept pace with modeling advances in hydrology, both through neglecting key processes (e.g., groundwater) and neglecting key aspects of spatial variability and hydrologic connectivity; and (2) many modeling advances in hydrology can readily be incorporated into ESMs and substantially improve predictions of the water cycle. Accelerating modeling advances in ESMs requires comprehensive hydrologic benchmarking activities, in order to systematically evaluate competing modeling alternatives, understand model weaknesses, and prioritize model development needs. This demands stronger collaboration, both through greater engagement of hydrologists in ESM development and through more detailed evaluation of ESM processes in research watersheds. Advances in the representation of hydrologic process in ESMs can substantially improve energy, carbon and nutrient cycle prediction capabilities through the fundamental role the water cycle plays in regulating these cycles.
Representation of integral dispersion relations by local forms
Ferreira, Erasmo; Sesma, Javier
2008-03-15
The representation of the usual integral dispersion relations (IDRs) of scattering theory through series of derivatives of the amplitudes is discussed, extended, simplified, and confirmed as mathematical identities. Forms of derivative dispersion relations (DDRs) valid for the whole energy interval, recently obtained and presented as double infinite series, are simplified through the use of new sum rules of the incomplete {gamma} functions, being reduced to single summations, where the usual convergence criteria are easily applied. For the forms of the imaginary amplitude used in the phenomenology of hadronic scattering at high energies, we show that expressions for the DDRs can represent, with absolute accuracy, the IDRs of scattering theory, as true mathematical identities. Besides the fact that the algebraic manipulation can be easily understood, numerical examples show the accuracy of these representations up to the maximum available machine precision. As a consequence of our work, it is concluded that the standard forms, sDDR, originally intended for high-energy limits, are an inconvenient and incomplete separation of terms of the full expression, leading to wrong evaluations. Since the correspondence between the IDR and DDR expansions is linear, our results have wide applicability, covering more general functions built as combinations of well-studied basic forms.
Braids as a representation space of SU(5)
Cartin, Daniel
2015-06-15
The standard model of particle physics provides very accurate predictions of phenomena occurring at the sub-atomic level, but the reason for the choice of symmetry group and the large number of particles considered elementary is still unknown. Along the lines of previous preon models positing a substructure to explain these aspects, Bilson-Thompson showed how the first family of elementary particles is realized as the crossings of braids made of three strands, with charges resulting from twists of those strands with certain conditions; in this topological model, there are only two distinct neutrino states. Modeling the particles as braids implies these braids must be the representation space of a Lie algebra, giving the symmetries of the standard model. In this paper, this representation is made explicit, obtaining the raising operators associated with the Lie algebra of SU(5), one of the earliest grand unified theories. Because the braids form a group, the action of these operators are braids themselves, leading to their identification as gauge bosons. Possible choices for the other two families are also given. Although this realization of particles as braids is lacking a dynamical framework, it is very suggestive, especially when considered as a natural method of adding matter to loop quantum gravity.
A role of chemical kinetics in the simulation of the reaction kernel of methane jet diffusion flames
Takahashi, Fumiaki; Katta, V.R.
1999-07-01
The detailed structure of the stabilizing region of an axisymmetric laminar methane jet diffusion flame has been studied numerically. Computations using a time-dependent, implicit, third-order accurate numerical scheme with buoyancy effects were performed using two different C{sub 2}-chemistry models and compared with the previous results using a C{sub 1}-chemistry model. The results were nearly identical for all kinetic models except that the C{sub 1}-chemistry model over-predicted the methyl-radical and formaldehyde concentrations on the fuel side of the flame and that the standoff distance of the flame base from the burner rim varied. The standoff distance was sensitive to the CH{sub 3} + H + (M) {yields} CH{sub 4} + (M) reaction. The highest reactivity spot (reaction kernel) was formed in the relatively low-temperature (<1,600 K) flame base, where the CH{sub 3} + O {yields} CH{sub 2}O + H reaction predominantly contributed to the heat release, providing a stationary ignition source to incoming reactants and thereby stabilizing the trailing diffusion flame.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Collins, John; Rogers, Ted
2015-04-01
There is considerable controversy about the size and importance of non-perturbative contributions to the evolution of transverse momentum dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that when taken to lower Q is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that non-perturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and non-perturbative. We make a motivated proposal for the parameterization of the non-perturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical non-perturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.
Kornilov, Oleg; Toennies, J. Peter
2015-02-21
The size distribution of para-H{sub 2} (pH{sub 2}) clusters produced in free jet expansions at a source temperature of T{sub 0} = 29.5 K and pressures of P{sub 0} = 0.9-1.96 bars is reported and analyzed according to a cluster growth model based on the Smoluchowski theory with kernel scaling. Good overall agreement is found between the measured and predicted, N{sub k} = Ak{sup a}e{sup -bk}, shape of the distribution. The fit yields values for A and b for values of a derived from simple collision models. The small remaining deviations between measured abundances and theory imply a (pH{sub 2}){sub k} magic number cluster of k = 13, as has been observed previously by Raman spectroscopy. The predicted linear dependence of b{sup -(a+1)} on source gas pressure was verified and used to determine the value of the basic effective agglomeration reaction rate constant. A comparison of the corresponding effective growth cross sections σ{sub 11} with results from a similar analysis of He cluster size distributions indicates that the latter are much larger, by a factor of 6-10. An analysis of the three-body recombination rates, the geometric sizes, and the fact that the He clusters are liquid independent of their size can explain the larger cross sections found for He.
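Fitting the kernel-scaling form N{sub k} = Ak{sup a}e{sup -bk} reduces, after taking logarithms, to a linear least-squares problem in (ln A, a, b). A stdlib-only sketch on synthetic counts (the values A = 100, a = 2, b = 0.5 are invented for the demonstration):

```python
import math

def fit_kernel_scaling(sizes, counts):
    """Fit N_k = A * k**a * exp(-b*k) by linear least squares on
    ln N_k = ln A + a*ln k - b*k (normal equations, stdlib only)."""
    rows = [(1.0, math.log(k), -float(k)) for k in sizes]
    y = [math.log(n) for n in counts]
    # build and solve the 3x3 normal equations by Gauss-Jordan elimination
    m = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
         + [sum(r[i] * yi for r, yi in zip(rows, y))] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))   # partial pivoting
        m[c], m[p] = m[p], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [v - f * w for v, w in zip(m[r], m[c])]
    lnA, a, b = (m[i][3] / m[i][i] for i in range(3))
    return math.exp(lnA), a, b

# synthetic cluster-size distribution with A=100, a=2, b=0.5
ks = range(1, 30)
Ns = [100.0 * k**2 * math.exp(-0.5 * k) for k in ks]
A, a, b = fit_kernel_scaling(ks, Ns)
```

In the study's procedure, a is fixed from a collision model and only A and b are fitted; the same linearized system applies with one fewer unknown.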
Method of Equivalencing for a Large Wind Power Plant with Multiple Turbine Representation:
Muljadi, E; Pasupulati, S.; Ellis, A.; Kosterov, D.
2008-07-01
This paper focuses on efforts to develop an equivalent representation of a Wind Power Plant (WPP) collector system for power system planning studies.
Method of Equivalencing for a Large Wind Power Plant with Multiple Turbine Representation: Preprint
Muljadi, E.; Pasupulati, S.; Ellis, A.; Kosterov, D.
2008-07-01
This paper focuses on our effort to develop an equivalent representation of a Wind Power Plant collector system for power system planning studies.
Request for Proposal No. DE-SOL-0007749 PART IV - REPRESENTATIONS...
National Nuclear Security Administration (NNSA)
... Computer Software. (d) The offeror has completed the annual representations and certifications electronically via the SAM Web site accessed through https://www.acquisition.gov . ...
Orbit-product representation and correction of Gaussian belief propagation
Johnson, Jason K; Chertkov, Michael; Chernyak, Vladimir
2009-01-01
We present a new interpretation of Gaussian belief propagation (GaBP) based on the 'zeta function' representation of the determinant as a product over orbits of a graph. We show that GaBP captures back-tracking orbits of the graph and consider how to correct this estimate by accounting for non-backtracking orbits. We show that the product over non-backtracking orbits may be interpreted as the determinant of the non-backtracking adjacency matrix of the graph with edge weights based on the solution of GaBP. An efficient method is proposed to compute a truncated correction factor including all non-backtracking orbits up to a specified length.
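The non-backtracking edge-adjacency matrix central to this correction can be built directly from the edge list. The sketch below is unweighted; in the paper the edges carry weights based on the GaBP solution:

```python
def nonbacktracking_matrix(edges):
    """Build the non-backtracking (Hashimoto) edge-adjacency matrix of an
    undirected graph: directed edge (u,v) connects to (v,w) unless w == u."""
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    n = len(directed)
    B = [[0] * n for _ in range(n)]
    for i, (u, v) in enumerate(directed):
        for j, (x, w) in enumerate(directed):
            if x == v and w != u:      # continue the walk, no U-turn
                B[i][j] = 1
    return directed, B

# triangle graph: each directed edge has exactly one non-backtracking successor
edges = [(0, 1), (1, 2), (2, 0)]
directed, B = nonbacktracking_matrix(edges)
row_sums = [sum(row) for row in B]
```

Powers and the determinant of this matrix enumerate non-backtracking closed walks, which is what links it to the orbit-product (zeta-function) representation of the determinant used in the paper.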
Graphics processing units accelerated semiclassical initial value representation molecular dynamics
Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele
2014-05-07
This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered, and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20) versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W), respectively, and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant, and semiclassical GPU calculations are shown to be environmentally friendly.
Local representation of the electronic dielectric response function
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Lu, Deyu; Ge, Xiaochuan
2015-12-11
We present a local representation of the electronic dielectric response function, based on a spatial partition of the dielectric response into contributions from each occupied Wannier orbital using a generalized density functional perturbation theory. This procedure is fully ab initio, and therefore allows us to rigorously define local metrics, such as “bond polarizability,” on Wannier centers. We show that the locality of the bare response function is determined by the locality of three quantities: Wannier functions of the occupied manifold, the density matrix, and the Hamiltonian matrix. Furthermore, in systems with a gap, the bare dielectric response is exponentially localized, which supports the physical picture of the dielectric response function as a collection of interacting local responses that can be captured by a tight-binding model.
Enhancement of Solar Energy Representation in the GCAM Model
Smith, Steven J.; Volke, April C.; Delgado Arias, Sabrina
2010-02-01
The representation of solar technologies in a research version of the GCAM (formerly MiniCAM) integrated assessment model has been enhanced to add technologies, improve the underlying data, and improve the interaction with the rest of the model. We find that the largest potential impact comes from the inclusion of thermal Concentrating Solar Power plants, which supply a substantial portion of electric generation in sunny regions of the world. Drawing on NREL research, domestic Solar Hot Water technologies have also been added in the United States region, where this technology competes with conventional electric and gas technologies. PV technologies are as implemented in the CCTP scenarios, drawing on NREL cost curves for the United States, extrapolated to other world regions using a spatial analysis of population and solar resources.
Coupling coefficients for tensor product representations of quantum SU(2)
Groenevelt, Wolter
2014-10-15
We study tensor products of infinite dimensional irreducible {sup *}-representations (not corepresentations) of the SU(2) quantum group. We obtain (generalized) eigenvectors of certain self-adjoint elements using spectral analysis of Jacobi operators associated to well-known q-hypergeometric orthogonal polynomials. We also compute coupling coefficients between different eigenvectors corresponding to the same eigenvalue. Since the continuous spectrum has multiplicity two, the corresponding coupling coefficients can be considered as 2 × 2 matrix-valued orthogonal functions. We compute explicitly the matrix elements of these functions. The coupling coefficients can be considered as q-analogs of Bessel functions. As a result we obtain several q-integral identities involving q-hypergeometric orthogonal polynomials and q-Bessel-type functions.
Kusumawati, Intan; Marwoto, Putut; Linuwih, Suharto
2015-09-30
The ability of multi-representation has been widely studied, but there has been no implementation through a model of learning. This study aimed to determine students' multi-representation ability, the relationship between multi-representation ability and oral communication skills, and the application of the relation between the two abilities through the learning model Presentatif Based on Multi representation (PBM) in solving geometric optics problems (Elementary Physics II). A concurrent mixed-methods design with qualitative-quantitative weighting was used. The data collection instruments, consisting of a pre-test and post-test in essay form, observation sheets for oral communication skills, and an observation sheet for assessing learning with the PBM model, all have high validity, with scores of 3.91, 4.22, 4.13, and 3.88, respectively. Test reliability, assessed with the Cronbach's alpha technique, gave a reliability coefficient of 0.494. Students of the Department of Physics Education, Unnes, were the research subjects. The tendency of students toward each mode of multi-representation, from high to low, followed the order M, D, G, V, whereas in order of accuracy the groups ranked V, D, G, M. The relationship between multi-representation ability and oral communication skills is proportional. Implementing this relation generates grounded theory. This study should be applied to other physics material, or at other universities, for comparison.
SEP Request for Approval Form 3 - Other Complex Regression Model Rationale
Broader source: Energy.gov (indexed) [DOE]
STUDIES IN ASTRONOMICAL TIME SERIES ANALYSIS. VI. BAYESIAN BLOCK REPRESENTATIONS
Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James
2013-02-20
This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations, at the same time suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it, an improved and generalized version of Bayesian Blocks, that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by Arias-Castro et al. In the spirit of Reproducible Research, all of the code and data necessary to reproduce all of the figures in this paper are included as supplementary material.
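The core of Bayesian Blocks is an O(N^2) dynamic program over change points. The sketch below uses the Gaussian "point measures" fitness for measurements with known errors; the penalty constant ncp_prior and the toy step-function data are assumptions for illustration, not values from the paper:

```python
import numpy as np

def bayesian_blocks_measures(x, sigma, ncp_prior=4.0):
    """Optimal piecewise-constant segmentation of measurements x with
    Gaussian errors sigma, via the standard change-point dynamic program."""
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    n = len(x)
    # Prefix sums of the sufficient statistics 1/sigma^2 and x/sigma^2.
    a = np.concatenate([[0.0], np.cumsum(1.0 / sigma**2)])
    b = np.concatenate([[0.0], np.cumsum(x / sigma**2)])
    best = np.zeros(n + 1)        # best[k]: max total fitness of first k points
    last = np.zeros(n + 1, int)   # start of the final block in that optimum
    for k in range(1, n + 1):
        j = np.arange(k)          # candidate starts of the final block
        fit = (b[k] - b[j])**2 / (a[k] - a[j]) / 2.0 - ncp_prior
        tot = best[j] + fit
        last[k] = np.argmax(tot)
        best[k] = tot[last[k]]
    edges, k = [], n              # backtrack the optimal change points
    while k > 0:
        edges.append(int(last[k]))
        k = last[k]
    return sorted(edges) + [n]    # block boundaries as indices into x

# A noisy step function: the segmentation should find the jump at index 20.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(20), np.ones(20)]) + 0.05 * rng.normal(size=40)
edges = bayesian_blocks_measures(x, np.full(40, 0.05))
```

Used in retrospective mode here; a trigger mode would re-run the recursion as each new point arrives and report when the final change point moves.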
Knowledge Representation Issues in Semantic Graphs for Relationship Detection
Barthelemy, M; Chow, E; Eliassi-Rad, T
2005-02-02
An important task for Homeland Security is the prediction of threat vulnerabilities, such as through the detection of relationships between seemingly disjoint entities. A structure used for this task is a "semantic graph", also known as a "relational data graph" or an "attributed relational graph". These graphs encode relationships as typed links between pairs of typed nodes. Indeed, semantic graphs are very similar to the semantic networks used in AI. The node and link types are related through an ontology graph (also known as a schema). Furthermore, each node has a set of attributes associated with it (e.g., "age" may be an attribute of a node of type "person"). Unfortunately, the selection of types and attributes for both nodes and links depends on human expertise and is somewhat subjective and even arbitrary. This subjectiveness introduces biases into any algorithm that operates on semantic graphs. Here, we raise some knowledge representation issues for semantic graphs and provide some possible solutions using recently developed ideas in the field of complex networks. In particular, we use the concept of transitivity to evaluate the relevance of individual links in the semantic graph for detecting relationships. We also propose new statistical measures for semantic graphs and illustrate these measures on graphs constructed from movie and terrorism data.
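Transitivity, the complex-networks measure invoked above, reduces to counting how many connected triples close into triangles. A minimal sketch on a hypothetical untyped graph (the paper's graphs also carry node and link types, omitted here):

```python
from itertools import combinations

def transitivity(adj):
    """Global transitivity (fraction of connected triples that close into
    triangles) for an undirected graph given as {node: set(neighbors)}."""
    closed = triples = 0
    for v, nbrs in adj.items():
        k = len(nbrs)
        triples += k * (k - 1) // 2            # triples centered at v
        closed += sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return closed / triples if triples else 0.0

# Toy entity graph: nodes are entities, edges are (untyped) relationships.
g = {'alice': {'bob', 'carol'},
     'bob':   {'alice', 'carol', 'dave'},
     'carol': {'alice', 'bob'},
     'dave':  {'bob'}}
score = transitivity(g)
```

A link whose removal sharply lowers this score participates in many closed triples, which is one plausible reading of "relevance of individual links".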
A survey on application of representation theory to molecular vibration
Prakasa, Yohenry; Muchtadi-Alamsyah, Intan, E-mail: ntan@math.itb.ac.id
2014-03-24
Representation theory is used extensively in many of the physical sciences, as every physical system has a symmetry group G. Various differential equations determine the vibration of a molecule, and the symmetry group of the molecule acts on the space of solutions of these equations. In this paper we use the CH{sub 4} (methane) molecule, which has four hydrogen atoms at the corners of a regular tetrahedron and a carbon atom at the center of the tetrahedron. The four hydrogen atoms in CH{sub 4} are permuted by the action of the symmetry group, and this action fixes the carbon atom. At each of the 5 vertices, we assign three unit vectors, called the standard basis vectors, in the directions of the three edges joined to the vertex. The symmetry group G of the molecule permutes the 15 standard basis vectors, so we may regard Q{sup 15} as a QG-module. By expressing Q{sup 15} as a direct sum of irreducible QG-modules, the problem of finding the normal modes of vibration is reduced to that of computing the eigenvectors of some small matrices.
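The decomposition of the 15-dimensional representation can be checked by standard character theory. The sketch below uses the T{sub d} character table and the character of the Cartesian representation (both textbook data, not taken from the paper) and the orthogonality formula for multiplicities:

```python
import numpy as np

# Character table of the tetrahedral group T_d.
# Conjugacy classes: E, 8C3, 3C2, 6S4, 6 sigma_d.
class_sizes = np.array([1, 8, 3, 6, 6])
chars = {
    'A1': np.array([1,  1,  1,  1,  1]),
    'A2': np.array([1,  1,  1, -1, -1]),
    'E':  np.array([2, -1,  2,  0,  0]),
    'T1': np.array([3,  0, -1,  1, -1]),
    'T2': np.array([3,  0, -1, -1,  1]),
}
order = class_sizes.sum()  # |T_d| = 24

# Character of the 15-dimensional Cartesian representation of CH4
# (3 unit vectors on each of the 5 atoms), obtained by counting the
# contribution of atoms left fixed by each class of operations.
chi_cart = np.array([15, 0, -1, -1, 3])

# Multiplicity of each irreducible: n_i = (1/|G|) sum_c |c| chi_i(c) chi(c).
mult = {name: int(round(class_sizes @ (chi * chi_cart) / order))
        for name, chi in chars.items()}
```

The result, A1 + E + T1 + 3 T2, accounts for all 15 dimensions; removing translations (T2) and rotations (T1) leaves the genuine vibrational modes.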
Real-space representation of electron correlation in π-conjugated systems
Wang, Jian; Baerends, Evert Jan, E-mail: e.j.baerends@vu.nl
2015-05-28
π-electron conjugation and aromaticity are commonly associated with delocalization and especially high mobility of the π electrons. We investigate whether the electron correlation (pair density) also exhibits signatures of the special electronic structure of conjugated systems. To that end the shape and extent of the pair density and derived quantities (exchange-correlation hole, Coulomb hole, and conditional density) are investigated for the prototype systems ethylene, hexatriene, and benzene. The answer is that the effects of π-electron conjugation are hardly discernible in the real-space representations of the electron correlation. We find the xc hole to be as localized (confined to atomic or diatomic regions) in conjugated systems as in small molecules. This result is relevant for density functional theory (DFT). The potential of the electron exchange-correlation hole is the largest part of v{sub xc}, the exchange-correlation Kohn-Sham potential. So the extent of the hole directly affects the orbital energies of both occupied and unoccupied Kohn-Sham orbitals and therefore has direct relevance for the excitation spectrum as calculated with time-dependent DFT calculations. The potential of the localized xc hole is comparatively more attractive than the actual hole left behind by an electron excited from a delocalized molecular orbital of a conjugated system.
Energy Science and Technology Software Center (OSTI)
2004-03-01
A package of classes for constructing and using distributed sparse and dense matrices, vectors and graphs, written in Java. Jpetra is intended to provide the foundation for basic matrix and vector operations for Java developers. Jpetra provides distributed memory operations via an abstract parallel machine interface. The most common implementation of this interface will be Java sockets.
2011-01-01
Check out this robotics breakthrough which allows robots to behave autonomously. For more information about INL research projects, visit http://www.facebook.com/idahonationallaboratory.
Physics Integration KErnels (PIKE)
Energy Science and Technology Software Center (OSTI)
2014-07-31
Pike is a software library for coupling and solving multiphysics applications. It provides basic interfaces and utilities for performing code-to-code coupling. It provides simple black-box Picard iteration methods for solving the coupled system of equations, including Jacobi and Gauss-Seidel solvers. Pike was developed originally to couple neutronics and thermal fluids codes to simulate a light water nuclear reactor for the Consortium for Simulation of Light-water Reactors (CASL) DOE Energy Innovation Hub. The Pike library contains no physics and just provides interfaces and utilities for coupling codes. It will be released open source under a BSD license as part of the Trilinos solver framework (trilinos.org), which is also BSD. This code provides capabilities similar to other open source multiphysics coupling libraries such as LIME, AMP, and MOOSE.
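Black-box Picard coupling of the kind described above can be sketched with two hypothetical single-physics "codes"; the functions below are invented stand-ins (not Pike's API) for, say, a power solve and a temperature solve:

```python
def physics_a(t):
    """Hypothetical code A: power as a function of temperature."""
    return 100.0 / (1.0 + 0.01 * t)

def physics_b(p):
    """Hypothetical code B: temperature as a function of power."""
    return 300.0 + 0.5 * p

def picard_gauss_seidel(t0, tol=1e-10, max_iter=100):
    """Gauss-Seidel Picard iteration: each code immediately consumes the
    other's freshest output (a Jacobi variant would instead feed both
    codes data from the previous iterate)."""
    t = t0
    for k in range(max_iter):
        p = physics_a(t)        # code A runs with the latest temperature
        t_new = physics_b(p)    # code B runs with A's fresh output
        if abs(t_new - t) < tol:
            return t_new, k + 1
        t = t_new
    return t, max_iter

temperature, iters = picard_gauss_seidel(300.0)
```

Because the composed map here is strongly contracting, the fixed point is reached in a handful of sweeps; real neutronics/thermal-fluids couplings converge more slowly and may need relaxation.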
Energy Science and Technology Software Center (OSTI)
2004-03-01
A package of classes for constructing and using distributed sparse and dense matrices, vectors, and graphs. Templated on the scalar and ordinal types so that any valid floating-point type, as well as any valid integer type, can be used with these classes. Other non-standard types, such as 3-by-3 matrices for the scalar type and mod-based integers for ordinal types, can also be used. Tpetra is intended to provide the foundation for basic matrix and vector operations for the next generation of Trilinos preconditioners and solvers. It can be considered the follow-on to Epetra. Tpetra provides distributed memory operations via an abstract parallel machine interface. The most common implementation of this interface will be MPI.
Complex-wide representation of material packaged in 3013 containers
Narlesky, Joshua E.; Peppers, Larry G.; Friday, Gary P.
2009-06-01
The DOE sites packaging plutonium oxide materials according to the Department of Energy 3013 Standard (DOE-STD-3013) are responsible for ensuring that the materials are represented by one or more samples in the Materials Identification and Surveillance (MIS) program. The sites categorized most of the materials into process groups, and the remaining materials were characterized based on the prompt gamma analysis results. The sites issued documents to identify the relationships between the materials packaged in 3013 containers and representative materials in the MIS program. These Represented documents were then reviewed and concurred with by the MIS Working Group. However, these documents were developed uniquely at each site and were issued before completion of sample characterization, small-scale experiments, and prompt gamma analysis, which provided more detailed information about the chemical impurities and the behavior of the material in storage. Therefore, based on the most recent data, the relationships between the materials packaged in 3013 containers and representative materials in the MIS program have been revised. With the prompt gamma analysis completed for Hanford, Rocky Flats, and Savannah River Site 3013 containers, MIS items have been assigned to the 3013 containers for which representation is based on the prompt gamma analysis results. With the revised relationships and the prompt gamma analysis results, a Master Represented table has been compiled to document the linkages between each 3013 container packaged to date and its representative MIS items. This table provides an important link between the Integrated Surveillance Program database, which contains information about each 3013 container, and the MIS items database, which contains the characterization, prompt gamma, and storage behavior data from shelf-life experiments for the representative MIS items.
Baykara, N. A.; Guervit, Ercan; Demiralp, Metin
2012-12-10
In this work a study of finite dimensional matrix approximations to products of quantum mechanical operators is conducted. It is emphasized that the matrix representation of the product of two operators is equal to the product of the matrix representations of each of the operators when all the fluctuation terms are ignored. The calculation of the elements of the matrices corresponding to the matrix representations of various operators, based on a three-term recursive relation, is described. Finally it is shown that the approximation quality depends on the choice of higher values of n, namely the dimension of the Hilbert space.
Online Support Vector Regression with Varying Parameters for Time-Dependent Data
Omitaomu, Olufemi A; Jeong, Myong K; Badiru, Adedeji B
2011-01-01
Support vector regression (SVR) is a machine learning technique that continues to receive interest in several domains, including manufacturing, engineering, and medicine. In order to extend its application to problems in which datasets arrive constantly and in which batch processing of the datasets is infeasible or expensive, an accurate online support vector regression (AOSVR) technique was proposed. The AOSVR technique efficiently updates a trained SVR function whenever a sample is added to or removed from the training set, without retraining on the entire training data. However, the AOSVR technique assumes that the new samples and the training samples have the same characteristics; hence, the same values of the SVR parameters are used for training and prediction. This assumption is not applicable to data samples that are inherently noisy and non-stationary, such as sensor data. As a result, we propose Accurate On-line Support Vector Regression with Varying Parameters (AOSVR-VP), which uses varying SVR parameters rather than fixed SVR parameters and hence accounts for the variability that may exist in the samples. To accomplish this objective, we also propose a generalized weight function to automatically update the weights of SVR parameters in on-line monitoring applications. The proposed function allows for lower and upper bounds for SVR parameters. We tested our proposed approach and compared results with the conventional AOSVR approach using two benchmark time series datasets and sensor data from a nuclear power plant. The results show that using varying SVR parameters is more applicable to time-dependent data.
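The varying-parameter idea can be sketched without an SVR solver. The toy below deliberately substitutes kernel ridge regression for SVR, and uses a hypothetical weight function that adapts the regularization parameter, within lower and upper bounds, from recent residuals; none of these choices are the paper's actual AOSVR-VP formulas:

```python
import numpy as np

def rbf(a, b, gamma=10.0):
    """Gaussian kernel matrix between 1-D sample vectors a and b."""
    return np.exp(-gamma * (a[:, None] - b[None, :])**2)

def fit_predict(x, y, x_new, lam):
    """Kernel ridge regression, used here as a stand-in for SVR;
    lam plays the role of the SVR regularization parameter."""
    alpha = np.linalg.solve(rbf(x, x) + lam * np.eye(x.size), y)
    return rbf(np.atleast_1d(np.asarray(x_new, float)), x) @ alpha

def varying_lam(residual_var, lo=1e-3, hi=1.0):
    """Hypothetical weight function: regularize more when recent residuals
    are noisier, clipped to lower/upper bounds as in AOSVR-VP."""
    return float(np.clip(residual_var, lo, hi))

# Non-stationary stream: the noise level shifts mid-stream, like sensor data.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4 * np.pi, 200)
noise = np.where(t < 2 * np.pi, 0.05, 0.4)
y = np.sin(t) + noise * rng.normal(size=t.size)

window, lam, errs, lams = 40, 1e-2, [], []
for i in range(window, t.size):
    xw, yw = t[i - window:i], y[i - window:i]
    errs.append(float(fit_predict(xw, yw, t[i], lam)[0] - np.sin(t[i])))
    resid = yw - fit_predict(xw, yw, xw, lam)
    lam = varying_lam(resid.var())   # parameter tracks the noise level
    lams.append(lam)
```

The adapted parameter rises when the stream becomes noisier, which is the qualitative behavior the varying-parameter scheme is after.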
Notes on power of normality tests of error terms in regression models
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to make inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
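Normality testing of regression error terms can be illustrated by applying a normality statistic to OLS residuals. The sketch implements the classical Jarque-Bera statistic directly (one standard test based on skewness and kurtosis, not the paper's RT class):

```python
import numpy as np

def jarque_bera(e):
    """Jarque-Bera normality statistic, JB = n/6 * (S^2 + (K - 3)^2 / 4),
    approximately chi-squared with 2 df under normality."""
    e = np.asarray(e, float) - np.mean(e)
    n = e.size
    m2 = np.mean(e**2)
    S = np.mean(e**3) / m2**1.5     # sample skewness
    K = np.mean(e**4) / m2**2       # sample kurtosis
    return n / 6.0 * (S**2 + (K - 3.0)**2 / 4.0)

# OLS residuals of a simple linear regression with normal errors...
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 300)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x + 0.1 * rng.normal(size=x.size)
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
jb_normal = jarque_bera(resid)

# ...versus heavy-tailed (Student-t, 2 df) errors, which inflate JB.
y2 = 2.0 + 3.0 * x + 0.1 * rng.standard_t(df=2, size=x.size)
resid2 = y2 - X @ np.linalg.lstsq(X, y2, rcond=None)[0]
jb_heavy = jarque_bera(resid2)
```

Comparing the two statistics against the chi-squared(2) critical value (about 5.99 at the 5% level) shows how non-normal disturbances are flagged while normal ones are not.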
Quantization maps, algebra representation, and non-commutative Fourier transform for Lie groups
Guedes, Carlos; Oriti, Daniele; Raasakka, Matti; LIPN, Institut Galilée, Université Paris-Nord, 99, av. Clément, 93430 Villetaneuse
2013-08-15
The phase space given by the cotangent bundle of a Lie group appears in the context of several models for physical systems. A representation for the quantum system in terms of non-commutative functions on the (dual) Lie algebra, and a generalized notion of (non-commutative) Fourier transform, different from standard harmonic analysis, have recently been developed and have found several applications, especially in the quantum gravity literature. We show that this algebra representation can be defined on the sole basis of a quantization map of the classical Poisson algebra, and identify the conditions for its existence. In particular, the corresponding non-commutative star-product carried by this representation is obtained directly from the quantization map via deformation quantization. We then clarify under which conditions a unitary intertwiner between such an algebra representation and the usual group representation can be constructed, giving rise to the non-commutative plane waves and, consequently, the non-commutative Fourier transform. The compact groups U(1) and SU(2) are considered for different choices of quantization maps, such as the symmetric and the Duflo map, and we exhibit the corresponding star-products, algebra representations, and non-commutative plane waves.
ORISE-09-OEWH-0176 POISSON REGRESSION ANALYSIS OF ILLNESS AND INJURY SURVEILLANCE DATA
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
E. L. Frome, J. P. Watkins, and E. D. Ellis, Center for Epidemiologic Research, Oak Ridge Institute for Science and Education, Oak Ridge, TN, USA; C. H. Strader, U.S. Department of Energy. Date Published: December 2012. Prepared by Oak Ridge Institute for Science and Education, P.O. Box 117, Oak Ridge, TN 37831-0117, managed by Oak Ridge Associated Universities for the U.S. DEPARTMENT OF ENERGY under contract
Christensen, N.C.; Emery, J.D.; Smith, M.L.
1985-04-29
A system converts from the boundary representation of an object to the constructive solid geometry representation thereof. The system converts the boundary representation of the object into elemental atomic geometrical units or I-bodies which are in the shape of stock primitives or regularized intersections of stock primitives. These elemental atomic geometrical units are then represented in symbolic form. The symbolic representations of the elemental atomic geometrical units are then assembled heuristically to form a constructive solid geometry representation of the object usable for manufacturing thereof. Artificial intelligence is used to determine the best constructive solid geometry representation from the boundary representation of the object. Heuristic criteria are adapted to the manufacturing environment for which the device is to be utilized. The surface finish, tolerance, and other information associated with each surface of the boundary representation of the object are mapped onto the constructive solid geometry representation of the object to produce an enhanced solid geometry representation, particularly useful for computer-aided manufacture of the object. 19 figs.
Christensen, Noel C.; Emery, James D.; Smith, Maurice L.
1988-04-05
A system converts from the boundary representation of an object to the constructive solid geometry representation thereof. The system converts the boundary representation of the object into elemental atomic geometrical units or I-bodies which are in the shape of stock primitives or regularized intersections of stock primitives. These elemental atomic geometrical units are then represented in symbolic form. The symbolic representations of the elemental atomic geometrical units are then assembled heuristically to form a constructive solid geometry representation of the object usable for manufacturing thereof. Artificial intelligence is used to determine the best constructive solid geometry representation from the boundary representation of the object. Heuristic criteria are adapted to the manufacturing environment for which the device is to be utilized. The surface finish, tolerance, and other information associated with each surface of the boundary representation of the object are mapped onto the constructive solid geometry representation of the object to produce an enhanced solid geometry representation, particularly useful for computer-aided manufacture of the object.
Induced representations of tensors and spinors of any rank in the Stueckelberg-Horwitz-Piron theory
Horwitz, Lawrence P.; Zeilig-Hess, Meir
2015-09-15
We show that a modification of Wigner’s induced representation for the description of a relativistic particle with spin can be used to construct spinors and tensors of arbitrary rank, with invariant decomposition over angular momentum. In particular, scalar and vector fields, as well as the representations of their transformations, are constructed. The method that is developed here admits the construction of wave packets and states of a many body relativistic system with definite total angular momentum. Furthermore, a Pauli-Lubanski operator is constructed on the orbit of the induced representation which provides a Casimir operator for the Poincaré group and which contains the physical intrinsic angular momentum of the particle covariantly.
Using Focused Regression for Accurate Time-Constrained Scaling of Scientific Applications
Barnes, B; Garren, J; Lowenthal, D; Reeves, J; de Supinski, B; Schulz, M; Rountree, B
2010-01-28
Many large-scale clusters now have hundreds of thousands of processors, and processor counts will be over one million within a few years. Computational scientists must scale their applications to exploit these new clusters. Time-constrained scaling, which is often used, tries to hold total execution time constant while increasing the problem size along with the processor count. However, complex interactions between parameters, the processor count, and execution time complicate determining the input parameters that achieve this goal. In this paper we develop a novel gray-box, focused regression-based approach that assists the computational scientist with maintaining constant run time on increasing processor counts. Combining application-level information from a small set of training runs, our approach allows prediction of the input parameters that result in similar per-processor execution time at larger scales. Our experimental validation across seven applications showed that median prediction errors are less than 13%.
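The gray-box idea can be sketched as a log-linear regression on a few training runs that is then inverted for the problem size at a larger processor count. The scaling law and run data below are invented for illustration, not taken from the paper's applications:

```python
import numpy as np

# Synthetic training runs: runtime t ~ n^1.5 / p^0.9 with ~2% noise
# (a hypothetical scaling law standing in for an application's behavior).
rng = np.random.default_rng(2)
n = np.array([1e6, 2e6, 4e6, 1e6, 2e6, 4e6, 8e6, 8e6])   # problem size
p = np.array([64, 64, 128, 128, 256, 256, 256, 512], float)
t = 1e-7 * n**1.5 / p**0.9 * np.exp(0.02 * rng.normal(size=n.size))

# Log-linear regression: log t = b0 + b1 log n + b2 log p.
X = np.column_stack([np.ones_like(n), np.log(n), np.log(p)])
b, *_ = np.linalg.lstsq(X, np.log(t), rcond=None)

def problem_size_for(target_time, procs):
    """Invert the fitted model for the problem size n at a processor count."""
    return float(np.exp((np.log(target_time) - b[0] - b[2] * np.log(procs))
                        / b[1]))

# Hold run time at the 64-processor baseline while scaling to 4096 processors.
t_target = float(t[0])
n_big = problem_size_for(t_target, 4096)
```

The focused variant in the paper additionally selects which application-level parameters enter the regression; this sketch fixes that choice to (n, p) up front.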
Harlim, John; Mahdi, Adam; Majda, Andrew J.
2014-01-15
A central issue in contemporary science is the development of nonlinear data-driven statistical-dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics-constrained nonlinear regression models was developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, and the model and observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east-west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet, with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three-dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skewed non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
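The augmented-state idea (estimating model coefficients alongside the state with an ensemble Kalman filter) can be sketched on a scalar AR(1) toy problem, far simpler than the 57-mode model; the priors, jitter term, and noise levels below are ad hoc choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
a_true, q, r = 0.9, 0.3, 0.1   # true coefficient, model and obs noise levels
T, Ne = 500, 200               # time steps, ensemble size

# Truth: an AR(1) process; we only observe the state with noise.
x_true = np.zeros(T)
for k in range(1, T):
    x_true[k] = a_true * x_true[k - 1] + q * rng.normal()
obs = x_true + r * rng.normal(size=T)

# Augmented ensemble: each member carries a state x and a coefficient a.
xe = rng.normal(size=Ne)
ae = 0.5 + 0.3 * rng.normal(size=Ne)   # broad prior on the coefficient

for k in range(1, T):
    # Forecast: propagate each member with its own coefficient.
    xe = ae * xe + q * rng.normal(size=Ne)
    ae = ae + 0.002 * rng.normal(size=Ne)   # small jitter avoids collapse
    # Analysis: perturbed-observation Kalman update of the (x, a) pair.
    dx, da = xe - xe.mean(), ae - ae.mean()
    var_x = float(np.mean(dx * dx))
    gain_x = var_x / (var_x + r**2)
    gain_a = float(np.mean(da * dx)) / (var_x + r**2)
    innov = obs[k] + r * rng.normal(size=Ne) - xe
    xe = xe + gain_x * innov
    ae = ae + gain_a * innov

a_est = float(ae.mean())
```

The cross-covariance between the coefficient and the observed state is what pulls the parameter ensemble toward the true value, the same mechanism the full algorithm uses for its many coefficients and noise covariances.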
Matrix elements for type 1 unitary irreducible representations of the Lie superalgebra gl(m|n)
Gould, Mark D.; Isaac, Phillip S.; Werry, Jason L.
2014-01-15
Using our recent results on eigenvalues of invariants associated to the Lie superalgebra gl(m|n), we use characteristic identities to derive explicit matrix element formulae for all gl(m|n) generators, particularly non-elementary generators, on finite dimensional type 1 unitary irreducible representations. We compare our results with existing works that deal with only subsets of the class of type 1 unitary representations, all of which only present explicit matrix elements for elementary generators. Our work therefore provides an important extension to existing methods, and thus highlights the strength of our techniques which exploit the characteristic identities.
Real-space quadrature: A convenient, efficient representation for multipole expansions
Rogers, David M.
2015-02-21
Multipoles are central to the theory and modeling of polarizable and nonpolarizable molecular electrostatics. This has made a representation in terms of point charges a highly sought after goal, since rotation of multipoles is a bottleneck in molecular dynamics implementations. All known point charge representations are orders of magnitude less efficient than spherical harmonics, due to either using too many fixed charge locations or nonlinear fitting of fewer charge locations. We present the first complete solution to this problem—completely replacing spherical harmonic basis functions by a dramatically simpler set of weights associated to fixed, discrete points on a sphere. This representation is shown to be space optimal. It reduces the spherical harmonic decomposition of Poisson's operator to pairwise summations over the point set. As a corollary, we also show exact quadrature-based formulas for contraction over trace-free supersymmetric 3D tensors. Moreover, multiplication of spherical harmonic basis functions translates to a direct product in this representation.
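The weights-on-fixed-points idea can be illustrated at low order: solve for least-norm weights on six fixed sites that reproduce a prescribed monopole and dipole, then check the summed point-charge potential against the multipole expansion in the far field. The octahedral site geometry and the moments are arbitrary choices for illustration, not the paper's optimal point sets:

```python
import numpy as np

# Six fixed charge sites on a small octahedron (radius d) about the origin.
d = 0.1
sites = d * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                      [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)

def weights_for(Q, p):
    """Least-norm weights on the fixed sites reproducing a given monopole Q
    and dipole vector p (moment constraints: sum w = Q, sum w * r = p)."""
    A = np.vstack([np.ones(6), sites.T])       # 4 x 6 constraint matrix
    b = np.concatenate([[Q], p])
    return np.linalg.lstsq(A, b, rcond=None)[0]

Q, p = 0.5, np.array([0.2, -0.1, 0.3])
w = weights_for(Q, p)

# Far-field check: summed point-charge potential vs the multipole expansion.
R = np.array([3.0, 2.0, 4.0])
phi_points = sum(wi / np.linalg.norm(R - ri) for wi, ri in zip(w, sites))
Rn = np.linalg.norm(R)
phi_multi = Q / Rn + p @ R / Rn**3
```

Only the weights change when the multipoles rotate; the sites stay fixed, which is the efficiency argument made above.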
Mandel, Kaisey S.; Kirshner, Robert P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Foley, Ryan J., E-mail: kmandel@cfa.harvard.edu [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States)
2014-12-20
We investigate the statistical dependence of the peak intrinsic colors of Type Ia supernovae (SNe Ia) on their expansion velocities at maximum light, measured from the Si II λ6355 spectral feature. We construct a new hierarchical Bayesian regression model, accounting for the random effects of intrinsic scatter, measurement error, and reddening by host galaxy dust, and implement a Gibbs sampler and deviance information criteria to estimate the correlation. The method is applied to the apparent colors from BVRI light curves and Si II velocity data for 79 nearby SNe Ia. The apparent color distributions of high-velocity (HV) and normal velocity (NV) supernovae exhibit significant discrepancies for B − V and B − R, but not other colors. Hence, they are likely due to intrinsic color differences originating in the B band, rather than dust reddening. The mean intrinsic B − V and B − R color differences between the HV and NV groups are 0.06 ± 0.02 and 0.09 ± 0.02 mag, respectively. A linear model finds significant slopes of −0.021 ± 0.006 and −0.030 ± 0.009 mag (10{sup 3} km s{sup −1}){sup −1} for intrinsic B − V and B − R colors versus velocity, respectively. Because the ejecta velocity distribution is skewed toward high velocities, these effects imply non-Gaussian intrinsic color distributions with skewness up to +0.3. Accounting for the intrinsic-color-velocity correlation results in corrections to A{sub V} extinction estimates as large as −0.12 mag for HV SNe Ia and +0.06 mag for NV events. Velocity measurements from SN Ia spectra have the potential to diminish systematic errors from the confounding of intrinsic colors and dust reddening affecting supernova distances.
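A stripped-down version of the Gibbs-sampler regression (dropping the measurement-error and dust-reddening components of the full hierarchical model) can be sketched on synthetic color-velocity data; the coefficients and scales below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for intrinsic color vs. velocity: y = b0 + b1*v + scatter.
n = 80
v = rng.normal(11.0, 1.0, n)            # velocity in units of 10^3 km/s
y = 0.8 + 0.03 * v + 0.05 * rng.normal(size=n)

X = np.column_stack([np.ones(n), v])
XtX, Xty = X.T @ X, X.T @ y
beta_hat = np.linalg.solve(XtX, Xty)    # OLS mean, reused each sweep
XtX_inv = np.linalg.inv(XtX)

# Gibbs sweeps: beta | sigma2 is Gaussian, sigma2 | beta is inverse-gamma
# (flat prior on beta, Jeffreys-type prior on sigma2).
sigma2, samples = 1.0, []
for it in range(3000):
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    resid = y - X @ beta
    sigma2 = (resid @ resid / 2.0) / rng.gamma(n / 2.0)
    if it >= 500:                        # discard burn-in
        samples.append(beta)

post = np.array(samples)
slope_mean = float(post[:, 1].mean())
```

The full model adds per-object measurement error and a dust term, so each object's latent intrinsic color becomes another block in the Gibbs sweep; the conditional structure stays the same.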
Integration of MHD load models with circuit representations of the Z generator.
Jennings, Christopher A.; Ampleford, David J.; Jones, Brent Manley; McBride, Ryan D.; Bailey, James E.; Jones, Michael C.; Gomez, Matthew Robert; Cuneo, Michael Edward; Nakhleh, Charles; Stygar, William A.; Savage, Mark Edward; Wagoner, Timothy C.; Moore, James K.
2013-03-01
MHD models of imploding loads fielded on the Z accelerator are typically driven by reduced or simplified circuit representations of the generator. The performance of many of the imploding loads is critically dependent on the current and power delivered to them, so it may be strongly influenced by the generator's response to their implosion. Current losses diagnosed in the transmission lines approaching the load are further known to limit the energy delivery, while exhibiting some load dependence. By comparing the convolute performance of a wide variety of short-pulse Z loads, we parameterize a convolute loss resistance applicable across different experiments. We incorporate this and other current loss terms into a transmission line representation of the Z vacuum section. We then apply this model to study the current delivery to a wide variety of wire array and MagLIF-style liner loads.
Wigner functions for noncommutative quantum mechanics: A group representation based construction
Chowdhury, S. Hasibul Hassan; Ali, S. Twareque
2015-12-15
This paper is devoted to the construction and analysis of the Wigner functions for noncommutative quantum mechanics, their marginal distributions, and star-products, following a technique developed earlier, viz., using the unitary irreducible representations of the group G{sub NC}, which is the threefold central extension of the Abelian group ℝ{sup 4}. These representations have been exhaustively studied in earlier papers. The group G{sub NC} is identified with the kinematical symmetry group of noncommutative quantum mechanics of a system with two degrees of freedom. The Wigner functions studied here reflect different levels of non-commutativity—both the operators of position and those of momentum not commuting, the position operators not commuting, and finally, the case of standard quantum mechanics, obeying the canonical commutation relations only.
Unitary irreducible representations of SL(2,C) in discrete and continuous SU(1,1) bases
Conrady, Florian; Hnybida, Jeff
2011-01-15
We derive the matrix elements of generators of unitary irreducible representations of SL(2,C) with respect to basis states arising from a decomposition into irreducible representations of SU(1,1). This is done with regard to a discrete basis diagonalized by J{sup 3} and a continuous basis diagonalized by K{sup 1}, and for both the discrete and continuous series of SU(1,1). For completeness, we also treat the more conventional SU(2) decomposition as a fifth case. The derivation proceeds in a functional/differential framework and exploits the fact that state functions and differential operators have a similar structure in all five cases. The states are defined explicitly and related to SU(1,1) and SU(2) matrix elements.
Cohen, Scott M.
2014-06-15
We give a sufficient condition that an operator sum representation of a separable quantum channel in terms of product operators is the unique product representation for that channel, and then provide examples of such channels for any number of parties. This result has implications for efforts to determine whether or not a given separable channel can be exactly implemented by local operations and classical communication. By the Choi-Jamiolkowski isomorphism, it also translates to a condition for the uniqueness of product state ensembles representing a given quantum state. These ideas follow from considerations concerning whether or not a subspace spanned by a given set of product operators contains at least one additional product operator.
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed. A database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric, independent of the parameters defining the view space, is computed for each element at each level of detail. The multiresolution database and associated strict error metrics are then processed in real time for real-time frame representations. View parameters for a view volume, comprising a view location and field of view, are selected. The error metric combined with the view parameters is converted to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation. First elements are then selected from the initial representation data set that are at least partially within the view volume. The first elements are placed in a split queue ordered by the value of the view-dependent error metric. It is determined whether the number of first elements in the queue meets or exceeds a predetermined number of elements or whether the largest error metric is less than or equal to a selected upper error metric bound. Until this determination is positive, the element at the head of the queue is force split and the resulting elements are inserted into the queue, forming a first multiresolution set of elements. The first multiresolution set of elements is then outputted as reduced-resolution view space data representing the terrain features.
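The split-queue refinement loop can be sketched in one dimension with a priority queue: pop the element with the worst view-dependent error, force split it, and push the halves until the element budget or error bound is met. The toy intervals and the distance-based metric below are hypothetical stand-ins for the spatial regions and metrics of the method:

```python
import heapq

def view_error(lo, hi, eye=0.3):
    """Hypothetical view-dependent metric for a 1-D 'terrain' interval:
    interval size divided by distance from the eye position."""
    mid = (lo + hi) / 2.0
    return (hi - lo) / (abs(mid - eye) + 1e-3)

def refine(max_elems=16, max_err=0.05):
    """Error-driven refinement with a split queue (max-heap via negation)."""
    heap = [(-view_error(0.0, 1.0), 0.0, 1.0)]
    while len(heap) < max_elems and -heap[0][0] > max_err:
        _, lo, hi = heapq.heappop(heap)     # worst element at the head...
        mid = (lo + hi) / 2.0               # ...is force split
        for a, b in ((lo, mid), (mid, hi)):
            heapq.heappush(heap, (-view_error(a, b), a, b))
    return sorted((lo, hi) for _, lo, hi in heap)

elems = refine()
```

The resulting set tiles the domain and concentrates small elements near the eye, the 1-D analogue of a view-dependent multiresolution terrain mesh.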
Madduri, Kamesh; Bader, David A.
2009-02-15
Graph-theoretic abstractions are extensively used to analyze massive data sets. Temporal data streams from socioeconomic interactions, social networking web sites, communication traffic, and scientific computing can be intuitively modeled as graphs. We present the first study of novel high-performance combinatorial techniques for analyzing large-scale information networks, encapsulating dynamic interaction data on the order of billions of entities. We present new data structures to represent dynamic interaction networks, and discuss algorithms for processing parallel insertions and deletions of edges in small-world networks. With these new approaches, we achieve an average performance rate of 25 million structural updates per second and a parallel speedup of nearly 28 on a 64-way Sun UltraSPARC T2 multicore processor, for insertions and deletions to a small-world network of 33.5 million vertices and 268 million edges. We also design parallel implementations of fundamental dynamic graph kernels related to connectivity and centrality queries. Our implementations are freely distributed as part of the open-source SNAP (Small-world Network Analysis and Partitioning) complex network analysis framework.
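The structural-update kernel at the heart of such systems (edge insertion and deletion on a dynamic undirected graph) can be sketched with adjacency sets. This is an illustrative serial sketch only; it makes no claim about SNAP's actual data structures or its parallel implementation.

```python
class DynamicGraph:
    """Minimal dynamic undirected graph supporting the structural-update
    kernel discussed above: edge insertions and deletions. Adjacency
    sets give O(1) average-case updates."""

    def __init__(self):
        self.adj = {}  # vertex -> set of neighbors

    def insert_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def delete_edge(self, u, v):
        # discard() makes deletion of a missing edge a harmless no-op.
        self.adj.get(u, set()).discard(v)
        self.adj.get(v, set()).discard(u)

    def degree(self, v):
        return len(self.adj.get(v, ()))

    def num_edges(self):
        # Each undirected edge is stored twice, once per endpoint.
        return sum(len(s) for s in self.adj.values()) // 2
```

A batch of updates is then just a stream of `insert_edge`/`delete_edge` calls, which is what the reported updates-per-second rate measures.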
Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)
2006-10-01
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
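A toy version of the sampling-based propagation idea can be sketched as follows: each focal element is an input interval with a basic probability assignment, sampling within each focal element approximates the range of the model output, and a focal element contributes its mass to belief (respectively plausibility) of an output set when its entire output range (respectively some of it) lies in that set. This is a deliberately simplified sketch of the general strategy, not the presented computational procedure.

```python
import random

def belief_plausibility(focal_elements, model, threshold, n_samples=200, seed=0):
    """Sampling-based sketch of evidence-theory propagation for the
    output set {y <= threshold}.

    focal_elements: list of ((lo, hi), mass) pairs for a 1-D input.
    model: callable x -> y, the (possibly expensive) model.
    """
    rng = random.Random(seed)
    bel = pls = 0.0
    for (lo, hi), m in focal_elements:
        # Approximate the output range of this focal element by sampling.
        ys = [model(rng.uniform(lo, hi)) for _ in range(n_samples)]
        if max(ys) <= threshold:   # whole range inside the set -> belief
            bel += m
        if min(ys) <= threshold:   # range intersects the set -> plausibility
            pls += m
    return bel, pls
```

With `model(x) = x**2`, focal elements ((0, 1), 0.5) and ((2, 3), 0.5), and threshold 1.5, only the first focal element contributes, giving belief and plausibility both 0.5.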
Impact of aerosol size representation on modeling aerosol-cloud interactions
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Zhang, Y.; Easter, R. C.; Ghan, S. J.; Abdul-Razzak, H.
2002-11-07
In this study, we use a 1-D version of a climate-aerosol-chemistry model with both modal and sectional aerosol size representations to evaluate the impact of aerosol size representation on modeling aerosol-cloud interactions in shallow stratiform clouds observed during the 2nd Aerosol Characterization Experiment. Both the modal (with prognostic aerosol number and mass or prognostic aerosol number, surface area and mass, referred to as the Modal-NM and Modal-NSM) and the sectional approaches (with 12 and 36 sections) predict total number and mass for interstitial and activated particles that are generally within several percent of references from a high-resolution 108-section approach. The modal approach with prognostic aerosol mass but diagnostic number (referred to as the Modal-M) cannot accurately predict the total particle number and surface areas, with deviations from the references ranging from 7-161%. The particle size distributions are sensitive to size representations, with normalized absolute differences of up to 12% and 37% for the 36- and 12-section approaches, and 30%, 39%, and 179% for the Modal-NSM, Modal-NM, and Modal-M, respectively. For the Modal-NSM and Modal-NM, differences from the references are primarily due to the inherent assumptions and limitations of the modal approach. In particular, they cannot resolve the abrupt size transition between the interstitial and activated aerosol fractions. For the 12- and 36-section approaches, differences are largely due to limitations of the parameterized activation for non-log-normal size distributions, plus the coarse resolution for the 12-section case. Differences are larger both with higher aerosol (i.e., less complete activation) and higher SO2 concentrations (i.e., greater modification of the initial aerosol distribution).
Augustine, C.
2011-10-01
The U.S. Department of Energy (DOE) Geothermal Technologies Program (GTP) tasked the National Renewable Energy Laboratory (NREL) with conducting the annual geothermal supply curve update. This report documents the approach taken to identify geothermal resources, determine the electrical producing potential of these resources, and estimate the levelized cost of electricity (LCOE), capital costs, and operating and maintenance costs from these geothermal resources at present and future timeframes under various GTP funding levels. Finally, this report discusses the resulting supply curve representation and how improvements can be made to future supply curve updates.
Li, Dongsheng; Khaleel, Mohammad A.; Sun, Xin; Garmestani, Hamid
2010-03-01
Statistical correlation function, including two-point function, is one of the popular methods to digitize microstructure quantitatively. This paper investigated how to represent statistical correlations using layered fast spherical harmonics expansion. A set of spherical harmonics coefficients may be used to represent the corresponding microstructures. It is applied to represent carbon nanotube composite microstructures to demonstrate how efficiently and precisely the harmonics coefficients will characterize the microstructure. This microstructure representation methodology will dramatically improve the computational efficiencies for future works in microstructure reconstruction and property prediction.
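The two-point correlation function mentioned above is straightforward to compute for a digitized microstructure; a standard FFT-autocorrelation sketch for a periodic binary image is shown below. This illustrates the statistic being expanded, not the paper's spherical-harmonics fit itself.

```python
import numpy as np

def two_point_correlation(phase):
    """Periodic two-point correlation S2(r) of a binary microstructure,
    computed via FFT autocorrelation. 'phase' is a 2-D array of 0/1
    indicator values for the phase of interest."""
    f = np.fft.fftn(phase)
    # Wiener-Khinchin: autocorrelation is the inverse FFT of |F|^2.
    s2 = np.fft.ifftn(f * np.conj(f)).real / phase.size
    return s2
```

Two known limits make a quick sanity check: S2 at zero lag equals the volume fraction of the phase, and no lag can exceed the zero-lag value.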
Approach of spherical harmonics to the representation of the deformed su(1,1) algebra
Fakhri, H.; Ghaneh, T.
2008-11-15
The m-shifting generators of su(2) algebra together with a pair of l-shifting ladder symmetry operators have been used in the space of all spherical harmonics Y{sub l}{sup m}({theta},{phi}) in order to introduce a new set of operators, expressing the transitions between them. It is shown that the space of spherical harmonics whose l+2m or l-2m is given presents negative and positive irreducible representations of a deformed su(1,1) algebra, respectively. These internal symmetries also suggest new algebraic methods to construct the spherical harmonics in the framework of the spectrum-generating algebras.
Oliveira, Joseph S.; Jones-Oliveira, Janet B.; Bailey, Colin G.; Gull, Dean W.
2008-07-01
One embodiment of the present invention includes a computer operable to represent a physical system with a graphical data structure corresponding to a matroid. The graphical data structure corresponds to a number of vertices and a number of edges that each correspond to two of the vertices. The computer is further operable to define a closed pathway arrangement with the graphical data structure and identify each different one of a number of fundamental cycles by evaluating a different respective one of the edges with a spanning tree representation. The fundamental cycles each include three or more of the vertices.
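The fundamental-cycle identification described in this embodiment is a classical construction: build a spanning tree, and each non-tree edge closes exactly one cycle with the tree path between its endpoints. The sketch below illustrates that construction for a connected undirected graph; it is not the patented device.

```python
from collections import deque

def fundamental_cycles(vertices, edges):
    """Return one fundamental cycle (as a vertex list) per non-tree edge,
    using a BFS spanning tree of a connected undirected graph."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    root = vertices[0]
    parent = {root: None}
    queue = deque([root])
    while queue:                      # BFS to build the spanning tree
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                queue.append(w)
    tree = {frozenset((v, p)) for v, p in parent.items() if p is not None}

    def path_to_root(v):
        path = []
        while v is not None:
            path.append(v)
            v = parent[v]
        return path

    cycles = []
    for u, v in edges:
        if frozenset((u, v)) not in tree:     # each non-tree edge -> one cycle
            pu, pv = path_to_root(u), path_to_root(v)
            common = set(pu) & set(pv)
            lca = next(x for x in pu if x in common)
            cu = [x for x in pu if x not in common]
            cv = [x for x in pv if x not in common]
            cycles.append(cu + [lca] + cv[::-1])
    return cycles
```

For a connected graph with V vertices and E edges, this yields E - V + 1 fundamental cycles, each with three or more vertices, matching the description above.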
Light-front representation of chiral dynamics in peripheral transverse densities
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Granados, Carlos G.; Weiss, Christian
2015-07-31
The nucleon's electromagnetic form factors are expressed in terms of the transverse densities of charge and magnetization at fixed light-front time. At peripheral transverse distances b = O(M_pi^{-1}) the densities are governed by chiral dynamics and can be calculated model-independently using chiral effective field theory (EFT). We represent the leading-order chiral EFT results for the peripheral transverse densities as overlap integrals of chiral light-front wave functions, describing the transition of the initial nucleon to soft pion-nucleon intermediate states and back. The new representation (a) explains the parametric order of the peripheral transverse densities; (b) establishes an inequality between the spin-independent and -dependent densities; (c) exposes the role of pion orbital angular momentum in chiral dynamics; (d) reveals a large left-right asymmetry of the current in a transversely polarized nucleon and suggests a simple interpretation. The light-front representation enables a first-quantized, quantum-mechanical view of chiral dynamics that is fully relativistic and exactly equivalent to the second-quantized, field-theoretical formulation. It relates the charge and magnetization densities measured in low-energy elastic scattering to the generalized parton distributions probed in peripheral high-energy scattering processes. The method can be applied to nucleon form factors of other operators, e.g. the energy-momentum tensor.
Bryan, Frank; Dennis, John; MacCready, Parker; Whitney, Michael
2015-11-20
This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability, (used in objective 1) within POP, the ocean component of CESM.
Kowalski, Karol; Bhaskaran-Nair, Kiran; Shelton, William A.
2014-09-07
In this paper we discuss a new formalism for producing an analytic coupled-cluster (CC) Green's function, yielding a highly scalable and accurate computational method for an N-electron system by shifting the poles of similarity-transformed Hamiltonians represented in the N-1 and N+1 electron Hilbert spaces. Simple criteria are derived for the states in the N-1 and N+1 electron spaces that are then corrected in the spectral resolution of the corresponding matrix representations of the similarity-transformed Hamiltonian. An accurate description of excited-state processes within a Green's function formalism would be of significant importance to a number of scientific communities, ranging from physics and chemistry to engineering and the biological sciences. This is because the Green's function methodology provides a direct path not only for calculating properties whose underlying origins come from coupled many-body interactions, but also for calculating electron transport, response, and correlation functions, allowing a direct link with experiment. As a special case of this general formulation, we discuss the application of this technique to the Green's function defined by the CCSD (CC with singles and doubles) representation of the ground-state wave function.
Zhang, P; Hu, J; Tyagi, N; Mageras, G; Lee, N; Hunt, M
2014-06-01
Purpose: To develop a robust planning paradigm which incorporates a tumor regression model into the optimization process to ensure tumor coverage in head and neck radiotherapy. Methods: Simulation and weekly MR images were acquired for a group of head and neck patients to characterize tumor regression during radiotherapy. For each patient, the tumor and parotid glands were segmented on the MR images and the weekly changes were formulated with an affine transformation, in which morphological shrinkage and positional changes are modeled by a scaling factor and centroid shifts, respectively. The tumor and parotid contours were also transferred to the planning CT via rigid registration. To perform the robust planning, weekly predicted PTV and parotid structures were created by transforming the corresponding simulation structures according to the weekly affine transformation matrix averaged over all patients other than the patient being planned. Next, robust PTV and parotid structures were generated as the union of the simulation and weekly prediction contours. In the subsequent robust optimization process, attainment of the clinical dose objectives was required for the robust PTV and parotids, as well as other organs at risk (OAR). The resulting robust plans were evaluated by examining the weekly and total accumulated dose to the actual weekly PTV and parotid structures. The robust plan was compared with the original plan based on the planning CT to determine its potential clinical benefit. Results: For four patients, the average weekly change in tumor volume and position was -4% and 1.2 mm laterally-posteriorly. Due to these temporal changes, the robust plans resulted in an accumulated PTV D95 that was, on average, 2.7 Gy higher than the plan created from the planning CT. OAR doses were similar. Conclusion: Integration of a tumor regression model into target delineation and plan robust optimization is feasible and may yield improved tumor coverage. Part of this research is supported by
Grider, Gary A.; Poole, Stephen W.
2015-09-01
Collective buffering and data pattern solutions are provided for storage, retrieval, and/or analysis of data in a collective parallel processing environment. For example, a method can be provided for data storage in a collective parallel processing environment. The method comprises receiving data to be written for a plurality of collective processes within a collective parallel processing environment, extracting a data pattern for the data to be written for the plurality of collective processes, generating a representation describing the data pattern, and saving the data and the representation.
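The "extract a data pattern ... generate a representation describing the data pattern" step can be illustrated with a tiny sketch: detect whether a set of write offsets forms a constant-stride pattern and, if so, replace the raw offset list with a compact descriptor. The dictionary schema is an illustrative assumption, not the patented representation.

```python
def extract_stride_pattern(offsets):
    """Sketch of data-pattern extraction: return a compact
    (start, stride, count) descriptor when the write offsets are
    regularly strided, else fall back to the raw offset list."""
    if len(offsets) < 2:
        return {"kind": "raw", "offsets": list(offsets)}
    stride = offsets[1] - offsets[0]
    if all(b - a == stride for a, b in zip(offsets, offsets[1:])):
        return {"kind": "strided", "start": offsets[0],
                "stride": stride, "count": len(offsets)}
    return {"kind": "raw", "offsets": list(offsets)}
```

A strided descriptor is what makes collective buffering cheap: many processes' writes collapse to a few integers instead of per-write metadata.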
A diabatic representation of the two lowest electronic states of Li{sub 3}
Ghassemi, Elham Nour; Larson, Jonas (Institut für Theoretische Physik, Universität zu Köln, Köln DE-50937); Larson, Åsa
2014-04-21
Using the Multi-Reference Configuration Interaction method, the adiabatic potential energy surfaces of Li{sub 3} are computed. The two lowest electronic states are bound and exhibit a conical intersection. By fitting the calculated potential energy surfaces to the cubic E ⊗ ε Jahn-Teller model we extract the effective Jahn-Teller parameters corresponding to Li{sub 3}. These are used to set up the transformation matrix which transforms from the adiabatic to a diabatic representation. This diabatization method gives a Hamiltonian for Li{sub 3} which is free from singular non-adiabatic couplings and should be accurate for large internuclear distances, and it thereby allows for bound dynamics in the vicinity of the conical intersection to be explored.
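For a two-state problem, the adiabatic-to-diabatic transformation is a rotation of the diagonal adiabatic Hamiltonian by a mixing angle, producing smooth off-diagonal diabatic couplings. The sketch below shows that generic construction; the mixing angle here is a free parameter standing in for the one fixed by the Jahn-Teller fit in the paper.

```python
import numpy as np

def diabatic_hamiltonian(e_lower, e_upper, mixing_angle):
    """Two-state adiabatic-to-diabatic transformation sketch: rotate the
    diagonal adiabatic Hamiltonian by the mixing angle to obtain a
    diabatic Hamiltonian with off-diagonal coupling."""
    u = np.array([[np.cos(mixing_angle), -np.sin(mixing_angle)],
                  [np.sin(mixing_angle),  np.cos(mixing_angle)]])
    h_adiabatic = np.diag([e_lower, e_upper])
    # Orthogonal similarity transform: eigenvalues are preserved.
    return u @ h_adiabatic @ u.T
```

Because the transformation is orthogonal, the diabatic matrix is symmetric and its eigenvalues reproduce the adiabatic energies exactly, which is the basic consistency check for any diabatization.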
Scale and the representation of human agency in the modeling of agroecosystems
Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.; Absar, Syeda Mariya; Nair, Sujithkumar Surendran; Parish, Esther S.
2015-07-17
Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration for how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Finally, such approaches, however, should be accompanied by greater recognition of the meta agency of model users and the need for more critical evaluation of model selection and application.
Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang
2014-04-15
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into amaximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT
Sharon Falcone Miller; Bruce G. Miller
2007-12-15
This paper compares the emissions factors for a suite of liquid biofuels (three animal fats, waste restaurant grease, pressed soybean oil, and a biodiesel produced from soybean oil) and four fossil fuels (i.e., natural gas, No. 2 fuel oil, No. 6 fuel oil, and pulverized coal) in Penn State's commercial water-tube boiler to assess their viability as fuels for green heat applications. The data were broken into two subsets, i.e., fossil fuels and biofuels. The regression model for the liquid biofuels (as a subset) did not perform well for all of the gases. In addition, the coefficient in the models showed the EPA method underestimating CO and NOx emissions. No relation could be studied for SO{sub 2} for the liquid biofuels as they contain no sulfur; however, the model showed a good relationship between the two methods for SO{sub 2} in the fossil fuels. AP-42 emissions factors for the fossil fuels were also compared to the mass balance emissions factors and EPA CFR Title 40 emissions factors. Overall, the AP-42 emissions factors for the fossil fuels did not compare well with the mass balance emissions factors or the EPA CFR Title 40 emissions factors. Regression analysis of the AP-42, EPA, and mass balance emissions factors for the fossil fuels showed a significant relationship only for CO{sub 2} and SO{sub 2}. However, the regression models underestimate the SO{sub 2} emissions by 33%. These tests illustrate the importance in performing material balances around boilers to obtain the most accurate emissions levels, especially when dealing with biofuels. The EPA emissions factors were very good at predicting the mass balance emissions factors for the fossil fuels and to a lesser degree the biofuels. While the AP-42 emissions factors and EPA CFR Title 40 emissions factors are easier to perform, especially in large, full-scale systems, this study illustrated the shortcomings of estimation techniques. 23 refs., 3 figs., 8 tabs.
Boué, Gwenaël; Fabrycky, Daniel C.
2014-07-10
The non-resonant secular dynamics of compact planetary systems are modeled by a perturbing function that is usually expanded in eccentricity and absolute inclination with respect to the invariant plane. Here, the expressions are given in a vectorial form which naturally leads to an expansion in eccentricity and mutual inclination. The two approaches are equivalent in most cases, but the vectorial one is specially designed for those cases where an entire quasi-coplanar system tilts to a large degree. Moreover, the vectorial expressions of the Hamiltonian and of the equations of motion are slightly simpler than those given in terms of the usual elliptical elements. We also provide the secular perturbing function in vectorial form expanded in semi-major axis ratio allowing for arbitrary eccentricities and inclinations. The interaction between the equatorial bulge of a central star and its planets is also provided, as is the relativistic periapse precession of any planet induced by the central star. We illustrate the use of this representation to follow the secular oscillations of the terrestrial planets of the solar system and for Kozai cycles which may take place in exoplanetary systems.
A new subgrid-scale representation of hydrometeor fields using a multivariate PDF
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Griffin, Brian M.; Larson, Vincent E.
2016-06-03
The subgrid-scale representation of hydrometeor fields is important for calculating microphysical process rates. In order to represent subgrid-scale variability, the Cloud Layers Unified By Binormals (CLUBB) parameterization uses a multivariate probability density function (PDF). In addition to vertical velocity, temperature, and moisture fields, the PDF includes hydrometeor fields. Previously, hydrometeor fields were assumed to follow a multivariate single lognormal distribution. Now, in order to better represent the distribution of hydrometeors, two new multivariate PDFs are formulated and introduced. The new PDFs represent hydrometeors using either a delta-lognormal or a delta-double-lognormal shape. The two new PDF distributions, plus the previous single lognormal shape, are compared to histograms of data taken from large-eddy simulations (LESs) of a precipitating cumulus case, a drizzling stratocumulus case, and a deep convective case. Finally, the warm microphysical process rates produced by the different hydrometeor PDFs are compared to the same process rates produced by the LES.
Formulating a simplified equivalent representation of distribution circuits for PV impact studies.
Reno, Matthew J.; Broderick, Robert Joseph; Grijalva, Santiago
2013-04-01
With an increasing number of Distributed Generation (DG) being connected on the distribution system, a method for simplifying the complexity of the distribution system to an equivalent representation of the feeder is advantageous for streamlining the interconnection study process. The general characteristics of the system can be retained while reducing the modeling effort required. This report presents a method of simplifying feeders to only specified buses-of-interest. These buses-of-interest can be potential PV interconnection locations or buses where engineers want to verify a certain power quality. The equations and methodology are presented with mathematical proofs of the equivalence of the circuit reduction method. An example 15-bus feeder is shown with the parameters and intermediate example reduction steps to simplify the circuit to 4 buses. The reduced feeder is simulated using PowerWorld Simulator to validate that those buses operate with the same characteristics as the original circuit. Validation of the method is also performed for snapshot and time-series simulations with variable load and solar energy output data to validate the equivalent performance of the reduced circuit with the interconnection of PV.
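A standard way to carry out this kind of circuit reduction is Kron reduction (a Schur complement of the bus admittance matrix), which eliminates all buses except the buses-of-interest while preserving the network's terminal behavior. The sketch below shows the generic Schur-complement step; it is offered as an illustration of the equivalencing idea, not as the report's exact procedure.

```python
import numpy as np

def kron_reduce(Y, keep):
    """Reduce a bus admittance matrix Y to the retained buses-of-interest
    via Kron reduction: Y_red = Y_kk - Y_ke * inv(Y_ee) * Y_ek."""
    keep = list(keep)
    elim = [i for i in range(Y.shape[0]) if i not in keep]
    Ykk = Y[np.ix_(keep, keep)]
    Yke = Y[np.ix_(keep, elim)]
    Yek = Y[np.ix_(elim, keep)]
    Yee = Y[np.ix_(elim, elim)]
    # Solve instead of explicitly inverting Yee for numerical stability.
    return Ykk - Yke @ np.linalg.solve(Yee, Yek)
```

As a check, eliminating the middle bus of a three-bus chain with two unit-admittance lines leaves the series equivalent of 0.5 admittance between the end buses.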
Jahandideh, Sepideh; Jahandideh, Samad; Asadabadi, Ebrahim Barzegari; Askarian, Mehrdad; Movahedi, Mohammad Mehdi; Hosseini, Somayyeh; Jahandideh, Mina
2009-11-15
Predicting the amount of hospital waste produced is helpful for the storage, transportation, and disposal stages of hospital waste management. Based on this fact, two predictor models, artificial neural networks (ANNs) and multiple linear regression (MLR), were applied to predict the rate of medical waste generation, both in total and by type (sharp, infectious, and general). In this study, a 5-fold cross-validation procedure on a database containing a total of 50 hospitals of Fars province (Iran) was used to verify the performance of the models. Three performance measures, MAR, RMSE, and R{sup 2}, were used to evaluate the performance of the models. The MLR, as a conventional model, obtained poor prediction performance measure values. However, MLR distinguished hospital capacity and bed occupancy as the more significant parameters. On the other hand, ANNs, as a more powerful model that had not previously been applied to predicting the rate of medical waste generation, showed high performance measure values, especially an R{sup 2} value of 0.99 confirming the good fit of the data. Such satisfactory results could be attributed to the non-linear nature of ANNs in problem solving, which provides the opportunity to relate independent variables to dependent ones non-linearly. In conclusion, the obtained results showed that our ANN-based model approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in the future.
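The 5-fold cross-validation with an R{sup 2} score used above can be sketched for the MLR baseline with ordinary least squares (the ANN side is omitted). The synthetic perfectly-linear data in the check below is an assumption for illustration, not the Fars province dataset.

```python
import numpy as np

def cross_validated_r2(X, y, k=5, seed=0):
    """k-fold cross-validation of an ordinary-least-squares model,
    reporting out-of-fold R^2. X is (n, p); y is (n,)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    preds = np.empty_like(y, dtype=float)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        A = np.c_[X[train], np.ones(len(train))]   # add intercept column
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        preds[test] = np.c_[X[test], np.ones(len(test))] @ coef
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Out-of-fold R{sup 2} is the honest version of the fit statistic: every prediction is made by a model that never saw that data point.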
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Benioff, Paul
2009-01-01
This work is based on the field of reference frames based on quantum representations of real and complex numbers described in other work. Here frame domains are expanded to include space and time lattices. Strings of qukits are described as hybrid systems, as they are both mathematical and physical systems. As mathematical systems they represent numbers; as physical systems in each frame the strings have a discrete Schrodinger dynamics on the lattices. The frame field has an iterative structure such that the contents of a stage j frame have images in a stage j - 1 (parent) frame. A discussion of parent frame images includes the proposal that points of stage j frame lattices have images as hybrid systems in parent frames. The resulting association of energy with images of lattice point locations, as hybrid system states, is discussed. Representations and images of other physical systems in the different frames are also described.
Zambrano, Eduardo; Šulc, Miroslav; Vaníček, Jiří
2013-08-07
Time-resolved electronic spectra can be obtained as the Fourier transform of a special type of time correlation function known as the fidelity amplitude, which, in turn, can be evaluated approximately and efficiently with the dephasing representation. Here we improve both the accuracy of this approximation, with an amplitude correction derived from the phase-space propagator, and its efficiency, with an improved cellular scheme employing the inverse Weierstrass transform and optimal scaling of the cell size. We demonstrate the advantages of the new methodology by computing dispersed time-resolved stimulated emission spectra in the harmonic potential, pyrazine, and the NCO molecule. In contrast, we show that in strongly chaotic systems such as the quartic oscillator the original dephasing representation is more appropriate than either the cellular or prefactor-corrected methods.
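The core operation here, obtaining a spectrum as the Fourier transform of a time correlation function, can be sketched with a toy fidelity amplitude: a single damped oscillation whose spectral peak is recovered by an FFT. The frequency and damping time below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Toy fidelity amplitude: one damped oscillation at angular frequency omega0.
omega0, tau = 2.0, 5.0
t = np.linspace(0.0, 100.0, 4096)
f = np.exp(-1j * omega0 * t - t / tau)        # correlation function f(t)

# Spectrum as the Fourier transform of the time correlation function.
spectrum = np.abs(np.fft.fft(f))
omega = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])

peak = abs(omega[np.argmax(spectrum)])        # lies near omega0
```

The finite damping time broadens the line; the peak position is set by the oscillation frequency, which is the basic mechanism behind reading spectra off correlation functions.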
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
THMC Modeling of EGS Reservoirs - Continuum through Discontinuum Representations: Capturing Reservoir Stimulation, Evolution and Induced Seismicity. Derek Elsworth, Pennsylvania State University. Chemistry, Reservoir and Integrated Models. Project Officer: Lauren Boyd. Total Project Funding: $1.61M ($1.11M + $0.5M). April 23, 2013. This presentation does not contain any proprietary, confidential, or otherwise restricted information.
Next Generation Models for Storage and Representation of Microbial Biological Annotation
Quest, Daniel J; Land, Miriam L; Brettin, Thomas S; Cottingham, Robert W
2010-01-01
Background Traditional genome annotation systems were developed in a very different computing era, one where the World Wide Web was just emerging. Consequently, these systems are built as centralized black boxes focused on generating high quality annotation submissions to GenBank/EMBL supported by expert manual curation. The exponential growth of sequence data drives a growing need for increasingly higher quality and automatically generated annotation. Typical annotation pipelines utilize traditional database technologies, clustered computing resources, Perl, C, and UNIX file systems to process raw sequence data, identify genes, and predict and categorize gene function. These technologies tightly couple the annotation software system to hardware and third party software (e.g. relational database systems and schemas). This makes annotation systems hard to reproduce, inflexible to modification over time, difficult to assess, difficult to partition across multiple geographic sites, and difficult to understand for those who are not domain experts. These systems are not readily open to scrutiny and therefore not scientifically tractable. The advent of Semantic Web standards such as Resource Description Framework (RDF) and OWL Web Ontology Language (OWL) enables us to construct systems that address these challenges in a new comprehensive way. Results Here, we develop a framework for linking traditional data to OWL-based ontologies in genome annotation. We show how data standards can decouple hardware and third party software tools from annotation pipelines, thereby making annotation pipelines easier to reproduce and assess. An illustrative example shows how TURTLE (Terse RDF Triple Language) can be used as a human readable, but also semantically-aware, equivalent to GenBank/EMBL files. Conclusions The power of this approach lies in its ability to assemble annotation data from multiple databases across multiple locations into a representation that is understandable to
Response and representation of ductile damage under varying shock loading conditions in tantalum
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Bronkhorst, C. A.; Gray, III, G. T.; Addessio, F. L.; Livescu, V.; Bourne, N. K.; MacDonald, S. A.; Withers, P. J.
2016-02-25
The response of polycrystalline metals, which possess adequate mechanisms for plastic deformation under extreme loading conditions, is often accompanied by the formation of pores within the structure of the material. This large deformation process is broadly identified as progressive, with nucleation, growth, coalescence, and failure being the physical path taken over very short periods of time. These are well known to be complex processes, strongly influenced by microstructure, loading path, and loading profile, which remain a significant challenge to represent and predict numerically. In the current study, the influence of loading path on the damage evolution in high-purity tantalum is presented. Tantalum samples were shock loaded to three different peak shock stresses using both symmetric impact and two different composite flyer plate configurations, such that upon unloading the three samples displayed nearly identical “pull-back” signals as measured via rear-surface velocimetry. While the “pull-back” signals observed were found to be similar in magnitude, the sample loaded to the highest peak stress nucleated a connected field of ductile fracture which resulted in complete separation, while the two lower peak stresses resulted in incipient damage. The damage evolution in the “soft” recovered tantalum samples was quantified using optical metallography, electron-backscatter diffraction, and tomography. These experiments are examined numerically through the use of a model for shock-induced porosity evolution during damage. The model is shown to describe the response of the tantalum reasonably well under strongly loaded conditions but less well in the nucleation-dominated regime. As a result, numerical results are also presented as a function of computational mesh density and discussed in the context of improved representation of the influence of material structure upon macro-scale models of ductile damage.
De Sapio, Vincent
2010-09-01
The analysis of spacecraft kinematics and dynamics requires an efficient scheme for spatial representation. While the representation of displacement in three dimensional Euclidean space is straightforward, orientation in three dimensions poses particular challenges. The unit quaternion provides an approach that mitigates many of the problems intrinsic in other representation approaches, including the ill-conditioning that arises from computing many successive rotations. This report focuses on the computational utility of unit quaternions and their application to the reconstruction of re-entry vehicle (RV) motion history from sensor data. To this end they will be used in conjunction with other kinematic and data processing techniques. We will present a numerical implementation for the reconstruction of RV motion solely from gyroscope and accelerometer data. This will make use of unit quaternions due to their numerical efficacy in dealing with the composition of many incremental rotations over a time series. In addition to signal processing and data conditioning procedures, algorithms for numerical quaternion-based integration of gyroscope data will be addressed, as well as accelerometer triangulation and integration to yield RV trajectory. Actual processed flight data will be presented to demonstrate the implementation of these methods.
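The quaternion bookkeeping described above, composing many small gyro-derived rotations while renormalizing to stay on the unit sphere, can be sketched as follows. The rotation rate, axis, and step size are invented for illustration; this is not the report's implementation.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_from_rate(omega, dt):
    """Unit quaternion for one small rotation from a body-rate sample (rad/s)."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = omega / np.linalg.norm(omega)
    return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])

# Compose 1000 incremental rotations (0.1 rad/s about z for 10 s), renormalizing
# each step to hold the unit-norm constraint and curb round-off drift.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = quat_mul(q, quat_from_rate(np.array([0.0, 0.0, 0.1]), 0.01))
    q /= np.linalg.norm(q)

total_angle = 2 * np.arctan2(np.linalg.norm(q[1:]), q[0])  # ~1.0 rad
```

Unlike rotation matrices, the quaternion needs only a cheap renormalization after each product to remain a valid rotation, which is the conditioning advantage the report exploits.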
Niyogi, Devdutta S.
2013-06-07
The CLASIC experiment was conducted over the US Southern Great Plains (SGP) in June 2007 with the objective of developing an enhanced understanding of cumulus convection, particularly as it relates to land surface conditions. The project was designed to help improve the representation of land-atmosphere convection initiation, which is important for global and regional models. The study addressed a critical, documented deficiency in models, central to the ARM objectives for cumulus convection initiation, particularly under summertime conditions. The project was guided by a scientific question building on the CLASIC theme questions: what is the effect of improved land surface representation on the ability of coupled models to simulate cumulus and convection initiation? The focus was on the US Southern Great Plains region. Since the CLASIC period was anomalously wet, the strategy was to use other periods and domains to develop a comparative assessment for the CLASIC data period, and to understand the mechanisms by which the anomalous wet conditions affected tropical systems and convection over land. The data periods include the IHOP 2002 field experiment, which covered roughly the same SGP domain as CLASIC, and some of the DOE-funded Ameriflux datasets.
Tao, Liang; McCurdy, C.W.; Rescigno, T.N.
2008-11-25
We show how to combine finite elements and the discrete variable representation in prolate spheroidal coordinates to develop a grid-based approach for quantum mechanical studies involving diatomic molecular targets. Prolate spheroidal coordinates are a natural choice for diatomic systems and have been used previously in a variety of bound-state applications. The use of exterior complex scaling in the present implementation allows for a transparently simple way of enforcing Coulomb boundary conditions and therefore straightforward application to electronic continuum problems. Illustrative examples involving the bound and continuum states of H2+, as well as the calculation of photoionization cross sections, show that the speed and accuracy of the present approach offer distinct advantages over methods based on single-center expansions.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Cover: CNW is Sandia's operating system for the Cray Red Storm supercomputer. Photo by ... on Sandia National Laboratories' Cray Red Storm computer in May 2008. inventorS or ...
Micro Kernel Benchmark for Evaluating Computer Performance
Energy Science and Technology Software Center (OSTI)
2007-04-06
Crystal_mk is a micro-benchmark that LLNL will use to evaluate vendors' software (e.g., compilers) and hardware (e.g., processor speed, memory design).
Robotics - Intelligence Kernel - Energy Innovation Portal
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
on the defined contour path, thus reducing the need for continuous attention by the operator. Benefits: - Reduces overlap and/or skipping - Increases safety, efficiency, accuracy - ...
Perturbation kernels for generalized seismological data functionals...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Authors: Chen, Po., Jordan, T.H., Lee, E. In seismic waveform analysis and inversion, data ... The generalized seismological data functionals (GSDF) of Gee & Jordan quantify waveform ...
Ceccato, Alessandro; Frezzato, Diego; Nicolini, Paolo
2015-12-14
In this work, we deal with general reactive systems involving N species and M elementary reactions under applicability of the mass-action law. Starting from the dynamic variables introduced in two previous works [P. Nicolini and D. Frezzato, J. Chem. Phys. 138(23), 234101 (2013); 138(23), 234102 (2013)], we turn to a new representation in which the system state is specified in a (N × M)²-dimensional space by a point whose coordinates have physical dimension of inverse-of-time. By adopting hyper-spherical coordinates (a set of dimensionless “angular” variables and a single “radial” one with physical dimension of inverse-of-time) and by examining the properties of their evolution law both formally and numerically on model kinetic schemes, we show that the system evolves towards the equilibrium as being attracted by a sequence of fixed subspaces (one at a time), each associated with a compact domain of the concentration space. Thus, we point out that also for general non-linear kinetics there exist fixed “objects” on the global scale, although they are conceived in such an abstract and extended space. Moreover, we propose a link between the persistence of the belonging of a trajectory to such subspaces and the closeness to the slow manifold which would be perceived by looking at the bundling of the trajectories in the concentration space.
Huang, Hsin-Yuan; Hall, Alex
2013-07-24
Stratocumulus and shallow cumulus clouds in subtropical oceanic regions (e.g., Southeast Pacific) cover thousands of square kilometers and play a key role in regulating global climate (e.g., Klein and Hartmann, 1993). Numerical modeling is an essential tool to study these clouds in regional and global systems, but the current generation of climate and weather models has difficulties in representing them in a realistic way (e.g., Siebesma et al., 2004; Stevens et al., 2007; Teixeira et al., 2011). While numerical models resolve the large-scale flow, subgrid-scale parameterizations are needed to estimate small-scale properties (e.g. boundary layer turbulence and convection, clouds, radiation), which have significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. To represent the contribution of these fine-scale processes to the resolved scale, climate models use various parameterizations, which are the main pieces in the model that contribute to the low clouds dynamics and therefore are the major sources of errors or approximations in their representation. In this project, we aim to 1) improve our understanding of the physical processes in thermal circulation and cloud formation, 2) examine the performance and sensitivity of various parameterizations in the regional weather model (Weather Research and Forecasting model; WRF), and 3) develop, implement, and evaluate the advanced boundary layer parameterization in the regional model to better represent stratocumulus, shallow cumulus, and their transition. Thus, this project includes three major corresponding studies. We find that the mean diurnal cycle is sensitive to model domain in ways that reveal the existence of different contributions originating from the Southeast Pacific land-masses. The experiments suggest that diurnal variations in circulations and thermal structures over this region are influenced by convection over the Peruvian sector of the Andes cordillera, while
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tilmes, Simone; Lamarque, Jean-Francois; Emmons, Louisa K.; Kinnison, Doug E.; Marsh, Dan; Garcia, Rolando R.; Smith, Anne K.; Neely, Ryan R.; Conley, Andrew; Vitt, Francis; et al
2016-05-20
The Community Earth System Model (CESM1) CAM4-chem has been used to perform the Chemistry Climate Model Initiative (CCMI) reference and sensitivity simulations. In this model, the Community Atmospheric Model version 4 (CAM4) is fully coupled to tropospheric and stratospheric chemistry. Details and specifics of each configuration, including new developments and improvements, are described. CESM1 CAM4-chem is a low-top model that reaches up to approximately 40 km and uses a horizontal resolution of 1.9° latitude and 2.5° longitude. For the specified dynamics experiments, the model is nudged to Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis. We summarize the performance of the three reference simulations suggested by CCMI, with a focus on the last 15 years of the simulation when most observations are available. Comparisons with selected data sets are employed to demonstrate the general performance of the model. We highlight new data sets that are suited for multi-model evaluation studies. The most important improvements of the model are the treatment of stratospheric aerosols and the corresponding adjustments for radiation and optics, the updated chemistry scheme including improved polar chemistry and stratospheric dynamics, and improved dry deposition rates. These updates lead to a very good representation of tropospheric ozone, within 20 % of values from available observations for most regions. In particular, the trend and magnitude of surface ozone are much improved compared to earlier versions of the model. Furthermore, stratospheric column ozone of the Southern Hemisphere in winter and spring is reasonably well represented. However, all experiments still underestimate CO, most significantly in Northern Hemisphere spring, and show a significant underestimation of hydrocarbons based on surface observations.
Morrison, Hugh (PI)
2012-09-21
This is the first meeting of the whole new GEWEX (Global Energy and Water Cycle Experiment) Atmospheric System Study (GASS) project that has been formed from the merger of the GEWEX Cloud System Study (GCSS) Project and the GEWEX Atmospheric Boundary Layer Studies (GABLS). As such, this meeting will play a major role in energizing GEWEX work in the area of atmospheric parameterizations of clouds, convection, stable boundary layers, and aerosol-cloud interactions for the numerical models used for weather and climate projections at both global and regional scales. The representation of these processes in models is crucial to GEWEX goals of improved prediction of the energy and water cycles at both weather and climate timescales. This proposal seeks funds to be used to cover incidental and travel expenses for U.S.-based graduate students and early career scientists (i.e., within 5 years of receiving their highest degree). We anticipate using DOE funding to support 5-10 people. We will advertise the availability of these funds by providing a box to check for interested participants on the online workshop registration form. We will also send a note to our participants' mailing lists reminding them that the funds are available and asking senior scientists to encourage their more junior colleagues to participate. All meeting participants are encouraged to submit abstracts for oral or poster presentations. The science organizing committee (see below) will base funding decisions on the relevance and quality of these abstracts, with preference given to under-represented populations (especially women and minorities) and to early career scientists being actively mentored at the meeting (e.g. students or postdocs attending the meeting with their advisor).
No, H.C.; Kazimi, M.S.
1983-03-01
This work involves the development of physical models for the constitutive relations of a two-fluid, three-dimensional sodium boiling code, THERMIT-6S. The code is equipped with a fluid conduction model, a fuel pin model, and a subassembly wall model suitable for simulating LMFBR transient events. Mathematically rigorous derivations of time-volume averaged conservation equations are used to establish the differential equations of THERMIT-6S. These equations are then discretized in a manner identical to the original THERMIT code. A virtual mass term is incorporated in THERMIT-6S to solve the ill-posed problem. Based on a simplified flow regime, namely cocurrent annular flow, constitutive relations for two-phase flow of sodium are derived. The wall heat transfer coefficient is based on the momentum-heat transfer analogy and a logarithmic law for the liquid film velocity distribution. A broad literature review is given for two-phase friction factors. It is concluded that entrainment can account for some of the discrepancies in the literature. Mass and energy exchanges are modelled by generalization of the turbulent flux concept. Interfacial drag coefficients are derived for annular flows with entrainment. Code assessment is performed by simulating three experiments for low-flow/high-power accidents and one experiment for low-flow/low-power accidents in the LMFBR. While the numerical results for pre-dryout are in good agreement with the data, those for post-dryout reveal the need for improvement of the physical models. The benefits of two-dimensional non-equilibrium representation of sodium boiling are studied.
White, James M.; Faber, Vance; Saltzman, Jeffrey S.
1992-01-01
An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes which represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete lookup table (LUT) where an 8-bit data signal is enabled to form a display of 24-bit color values. The LUT is formed in a sampling and averaging process from the image color values with no requirement to define discrete Voronoi regions for color compression. Image color values are assigned 8-bit pointers to their closest LUT value whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
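The sampling-and-averaging LUT idea can be sketched on synthetic data as follows. The sample size, group size, and nearest-color assignment below are illustrative guesses, not the patented procedure; the point is that each pixel keeps only an 8-bit pointer from which 24-bit color is recovered via the LUT.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "image": 5000 RGB pixels with 24-bit color (values 0-255 per channel).
pixels = rng.integers(0, 256, size=(5000, 3)).astype(float)

# Build a 256-entry LUT by sampling pixels and averaging small groups, a crude
# stand-in for the sampling-and-averaging step (no Voronoi regions needed).
sample = pixels[rng.choice(len(pixels), size=1024, replace=False)]
lut = sample.reshape(256, 4, 3).mean(axis=1)

# Each pixel stores only an 8-bit pointer to its closest LUT color; display
# recovers 24-bit values by indexing the LUT with that pointer.
d2 = ((pixels[:, None, :] - lut[None, :, :]) ** 2).sum(axis=2)
pointers = d2.argmin(axis=1).astype(np.uint8)
decoded = lut[pointers]
```

Storage drops from 24 bits to 8 bits per pixel, at the cost of the quantization error between `pixels` and `decoded`.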
Mitchell, David L.
2013-09-05
It is well known that cirrus clouds play a major role in regulating the earth’s climate, but the details of how this works are just beginning to be understood. This project targeted the main property of cirrus clouds that influences climate processes: the ice fall speed. That is, this project improves the representation of the mass-weighted ice particle fall velocity, V_m, in climate models used to predict future climate on global and regional scales. Prior to 2007, the dominant sizes of ice particles in cirrus clouds were poorly understood, making it virtually impossible to predict how cirrus clouds interact with sunlight and thermal radiation. Due to several studies investigating the performance of optical probes used to measure the ice particle size distribution (PSD), as well as the remote sensing results from our last ARM project, it is now well established that the anomalously high concentrations of small ice crystals often reported prior to 2007 were measurement artifacts. Advances in the design and data processing of optical probes have greatly reduced these ice artifacts that resulted from the shattering of ice particles on the probe tips and/or inlet tube, and PSD measurements from one of these improved probes (the 2-dimensional Stereo or 2D-S probe) are utilized in this project to parameterize V_m for climate models. Our original plan in the proposal was to parameterize the ice PSD (in terms of temperature and ice water content) and ice particle mass and projected area (in terms of mass- and area-dimensional power laws or m-D/A-D expressions), since these are the microphysical properties that determine V_m, and then proceed to calculate V_m from these parameterized properties. But the 2D-S probe directly measures ice particle projected area and indirectly estimates ice particle mass for each size bin. It soon became apparent that the original plan would introduce more uncertainty in the V_m calculations.
Yang, Aileen; Hoek, Gerard; Montagne, Denise; Leseman, Daan L.A.C.; Hellack, Bryan; Kuhlbusch, Thomas A.J.; Cassee, Flemming R.; Brunekreef, Bert; Janssen, Nicole A.H.
2015-07-15
Oxidative potential (OP) of ambient particulate matter (PM) has been suggested as a health-relevant exposure metric. In order to use OP for exposure assessment, information is needed about how well central site OP measurements and modeled average OP at the home address reflect temporal and spatial variation of personal OP. We collected 96-hour personal, home outdoor and indoor PM2.5 samples from 15 volunteers living either at traffic, urban or regional background locations in Utrecht, the Netherlands. OP was also measured at one central reference site to account for temporal variations. OP was assessed using electron spin resonance (OP^ESR) and dithiothreitol (OP^DTT). Spatial variation of average OP at the home address was modeled using land use regression (LUR) models. For both OP^ESR and OP^DTT, temporal correlations of central site measurements with home outdoor measurements were high (R>0.75), and moderate to high (R=0.49–0.70) with personal measurements. The LUR model predictions for OP correlated significantly with the home outdoor concentrations for OP^DTT and OP^ESR (R=0.65 and 0.62, respectively). LUR model predictions were moderately correlated with personal OP^DTT measurements (R=0.50). Adjustment for indoor sources, such as vacuum cleaning and absence of a fume-hood, improved the temporal and spatial agreement with measured personal exposure for OP^ESR. OP^DTT was not associated with any indoor sources. Our study results support the use of central site OP for exposure assessment in epidemiological studies focusing on short-term health effects. - Highlights: • Oxidative potential (OP) of PM was proposed as a health-relevant exposure metric. • We evaluated the relationship between measured and modeled outdoor and personal OP. • Temporal correlations of central site with personal OP are moderate to high. • Adjusting for indoor sources improved the agreement with personal OP. • Our results
Huang, Shao Hui; O'Sullivan, Brian; Ringash, Jolie; Hope, Andrew; Gilbert, Ralph; Irish, Jonathan; Perez-Ordonez, Bayardo; Weinreb, Ilan; Waldron, John
2013-12-01
Purpose: To compare the temporal lymph node (LN) regression and regional control (RC) after primary chemoradiation therapy/radiation therapy in human papillomavirus-related [HPV(+)] versus human papillomavirus-unrelated [HPV(-)] head-and-neck cancer (HNC). Methods and Materials: All cases of N2-N3 HNC treated with radiation therapy/chemoradiation therapy between 2003 and 2009 were reviewed. Human papillomavirus status was ascertained by p16 staining on all available oropharyngeal cancers. Larynx/hypopharynx cancers were considered HPV(-). Initial radiologic complete nodal response (CR) (≤1.0 cm 8-12 weeks after treatment), ultimate LN resolution, and RC were compared between HPV(+) and HPV(-) HNC. Multivariate analysis identified outcome predictors. Results: A total of 257 HPV(+) and 236 HPV(-) HNCs were identified. The initial LN size was larger (mean, 2.9 cm vs 2.5 cm; P<.01) with a higher proportion of cystic LNs (38% vs 6%, P<.01) in HPV(+) versus HPV(-) HNC. CR was achieved in 125 HPV(+) HNCs (49%) and 129 HPV(-) HNCs (55%) (P=.18). The mean post-treatment largest LN was 36% of the original size in the HPV(+) group and 41% in the HPV(-) group (P<.01). The actuarial LN resolution was similar in the HPV(+) and HPV(-) groups at 12 weeks (42% and 43%, respectively), but it was higher in the HPV(+) group than in the HPV(-) group at 36 weeks (90% vs 77%, P<.01). The median follow-up period was 3.6 years. The 3-year RC rate was higher in the HPV(-) CR cases versus non-CR cases (92% vs 63%, P<.01) but was not different in the HPV(+) CR cases versus non-CR cases (98% vs 92%, P=.14). On multivariate analysis, HPV(+) status predicted ultimate LN resolution (odds ratio, 1.4 [95% confidence interval, 1.1-1.7]; P<.01) and RC (hazard ratio, 0.3 [95% confidence interval 0.2-0.6]; P<.01). Conclusions: HPV(+) LNs involute more quickly than HPV(-) LNs but undergo a more prolonged process to eventual CR beyond the time of initial assessment at 8 to 12 weeks after treatment. Post
Jeffcoat, David B.; DePrince, A. Eugene
2014-12-07
Propagating the equations of motion (EOM) for the one-electron reduced-density matrix (1-RDM) requires knowledge of the corresponding two-electron RDM (2-RDM). We show that the indeterminacy of this expression can be removed through a constrained optimization that resembles the variational optimization of the ground-state 2-RDM subject to a set of known N-representability conditions. Electronic excitation energies can then be obtained by propagating the EOM for the 1-RDM and following the dipole moment after the system interacts with an oscillating external electric field. For simple systems with well-separated excited states whose symmetry differs from that of the ground state, excitation energies obtained from this method are comparable to those obtained from full configuration interaction computations. Although the optimized 2-RDM satisfies necessary N-representability conditions, the procedure cannot guarantee a unique mapping from the 1-RDM to the 2-RDM. This deficiency is evident in the mean-field-quality description of transitions to states of the same symmetry as the ground state, as well as in the inability of the method to describe Rabi oscillations.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Fernando, Sudarshan; Gunaydin, Murat
2014-11-28
We study the minimal unitary representation (minrep) of SO(5, 2), obtained by quantization of its geometric quasiconformal action, its deformations and supersymmetric extensions. The minrep of SO(5, 2) describes a massless conformal scalar field in five dimensions and admits a unique deformation which describes a massless conformal spinor. Scalar and spinor minreps of SO(5, 2) are the 5d analogs of Dirac's singletons of SO(3, 2). We then construct the minimal unitary representation of the unique 5d superconformal algebra F(4) with the even subalgebra SO(5, 2) ⊕ SU(2). The minrep of F(4) describes a massless conformal supermultiplet consisting of two scalar and one spinor fields. We then extend our results to the construction of higher spin AdS6/CFT5 (super)-algebras. The Joseph ideal of the minrep of SO(5, 2) vanishes identically as operators and hence its enveloping algebra yields the AdS6/CFT5 bosonic higher spin algebra directly. The enveloping algebra of the spinor minrep defines a deformed higher spin algebra for which a deformed Joseph ideal vanishes identically as operators. These results are then extended to the construction of the unique higher spin AdS6/CFT5 superalgebra as the enveloping algebra of the minimal unitary realization of F(4) obtained by the quasiconformal methods.
Wagner, A.F.; Schatz, G.C.; Bowman, J.M.
1981-05-01
The DIM surface of Whitlock, Muckerman, and Fisher for the O(³P) + H₂ system is used as a test case to evaluate the usefulness of a variety of fitting functions for the representation of potential energy surfaces. Fitting functions based on LEPS, BEBO, and rotated Morse oscillator (RMO) forms are examined. Fitting procedures are developed for combining information about a small portion of the surface and the fitting function to predict where on the surface more information must be obtained to improve the accuracy of the fit. Both unbiased procedures and procedures heavily biased toward the saddle point region of the surface are investigated. Collinear quasiclassical trajectory calculations of the reaction rate constant and one- and three-dimensional transition state theory rate constant calculations are performed and compared for selected fits and the exact DIM test surface. Fitting functions based on BEBO and RMO forms are found to give quite accurate results.
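The basic fitting idea, choosing functional-form parameters that minimize squared error against surface samples, can be sketched with a brute-force least-squares search over a Morse curve (the radial form underlying RMO fits). The H2-like parameter values, grid ranges, and noise level are invented for illustration; the paper's procedures are considerably more sophisticated.

```python
import numpy as np

def morse(r, De, a, r0):
    """Morse curve, the radial form underlying rotated-Morse-oscillator fits."""
    return De * (1.0 - np.exp(-a * (r - r0))) ** 2

# Synthetic "surface" samples from a known Morse curve plus small noise.
rng = np.random.default_rng(2)
r = np.linspace(0.6, 3.0, 40)
V = morse(r, 4.75, 1.94, 0.74) + rng.normal(0.0, 1e-3, r.size)

# Brute-force least-squares search for the best-fitting parameters.
best, best_sse = None, np.inf
for De in np.linspace(3.0, 6.0, 61):
    for a in np.linspace(1.0, 3.0, 41):
        for r0 in np.linspace(0.5, 1.0, 26):
            sse = np.sum((morse(r, De, a, r0) - V) ** 2)
            if sse < best_sse:
                best, best_sse = (De, a, r0), sse
```

In practice a gradient-based nonlinear least-squares solver would replace the grid search; the grid version just makes the objective being minimized explicit.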
Tang, Guoping; Yuan, Fengming; Bisht, Gautam; Hammond, Glenn E.; Lichtner, Peter C.; Collier, Nathaniel O.; Kumar, Jitendra; Mills, Richard T.; Xu, Xiaofeng; Andre, Ben; Hoffman, Forrest M.; Painter, Scott L.; Thornton, Peter E.
2016-01-01
Reactive transport codes (e.g., PFLOTRAN) are increasingly used to improve the representation of biogeochemical processes in terrestrial ecosystem models (e.g., the Community Land Model, CLM). As CLM and PFLOTRAN use explicit and implicit time stepping, implementation of CLM biogeochemical reactions in PFLOTRAN can result in negative concentration, which is not physical and can cause numerical instability and errors. The objective of this work is to address the nonnegativity challenge to obtain accurate, efficient, and robust solutions. We illustrate the implementation of a reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant nitrogen uptake reactions and test the implementation at arctic, temperate, and tropical sites. We examine use of scaling back the update during each iteration (SU), log transformation (LT), and downregulating the reaction rate to account for reactant availability limitation to enforce nonnegativity. Both SU and LT guarantee nonnegativity but with implications. When a very small scaling factor occurs due to either consumption or numerical overshoot, and the iterations are deemed converged because of too small an update, SU can introduce excessive numerical error. LT involves multiplication of the Jacobian matrix by the concentration vector, which increases the condition number, decreases the time step size, and increases the computational cost. Neither SU nor LT prevents zero concentration. When the concentration is close to machine precision or 0, a small positive update stops all reactions for SU, and LT can fail due to a singular Jacobian matrix. The consumption rate has to be downregulated such that the solution to the mathematical representation is positive. A first-order rate downregulates consumption and is nonnegative, and adding a residual concentration makes it positive. For zero-order rate or when the reaction rate is not a function of a reactant, representing the availability limitation of each
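The SU option, scaling back the Newton update so that no concentration goes negative, can be sketched as follows. This is a simplified stand-in, not the PFLOTRAN implementation; note how the limiting species lands exactly at zero, the failure mode the abstract describes (a zero concentration stalls all reactions consuming that species).

```python
import numpy as np

def scaled_update(c, dc):
    """Apply a Newton update dc to concentrations c, scaled back ('SU')
    so that no entry goes negative. A simplified sketch."""
    mask = dc < 0.0
    lam = 1.0
    if mask.any():
        # Largest step fraction in (0, 1] keeping c + lam*dc >= 0.
        lam = min(1.0, float(np.min(c[mask] / -dc[mask])))
    # Clip round-off so the limiting species lands exactly at zero.
    return np.maximum(c + lam * dc, 0.0)

# Species 0 would be driven negative by the full step, so the whole update
# is scaled by lam = 0.2, leaving species 0 at zero.
c_new = scaled_update(np.array([1e-3, 2.0]), np.array([-5e-3, 0.5]))
```

Because the scaling factor multiplies the entire update vector, one nearly-depleted species throttles the progress of all others, which is the "excessive numerical error" concern raised above.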
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tang, Guoping; Yuan, Fengming; Bisht, Gautam; Hammond, Glenn E.; Lichtner, Peter C.; Collier, Nathaniel O.; Kumar, Jitendra; Mills, Richard T.; Xu, Xiaofeng; Andre, Ben; et al
2016-01-01
Reactive transport codes (e.g., PFLOTRAN) are increasingly used to improve the representation of biogeochemical processes in terrestrial ecosystem models (e.g., the Community Land Model, CLM). As CLM and PFLOTRAN use explicit and implicit time stepping, respectively, implementation of CLM biogeochemical reactions in PFLOTRAN can result in negative concentrations, which are not physical and can cause numerical instability and errors. The objective of this work is to address the nonnegativity challenge to obtain accurate, efficient, and robust solutions. We illustrate the implementation of a reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant nitrogen uptake reactions, and test the implementation at arctic, temperate, and tropical sites. We examine the use of scaling back the update during each iteration (SU), log transformation (LT), and downregulating the reaction rate to account for reactant availability limitation, to enforce nonnegativity. Both SU and LT guarantee nonnegativity, but with implications. When a very small scaling factor occurs due to either consumption or numerical overshoot, and the iterations are deemed converged because of too small an update, SU can introduce excessive numerical error. LT involves multiplication of the Jacobian matrix by the concentration vector, which increases the condition number, decreases the time step size, and increases the computational cost. Neither SU nor LT prevents zero concentration. When the concentration is close to machine precision or 0, a small positive update stops all reactions for SU, and LT can fail due to a singular Jacobian matrix. The consumption rate has to be downregulated such that the solution to the mathematical representation is positive. A first-order rate downregulates consumption and is nonnegative, and adding a residual concentration makes it positive. For zero-order rate or when the reaction rate is not a function of a reactant, representing the availability limitation
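The scaling-back (SU) approach described above can be sketched as follows; the function and variable names are illustrative and are not PFLOTRAN's:

```python
def scaled_newton_update(conc, update):
    """Scale back a Newton update so that no concentration goes
    negative (the 'SU' approach). Names are illustrative."""
    scale = 1.0
    for c, d in zip(conc, update):
        if c + d < 0.0:                  # this species would go negative
            scale = min(scale, c / -d)   # largest step keeping c >= 0
    return [c + scale * d for c, d in zip(conc, update)], scale

# one species would be driven negative; the whole update is halved
new_conc, s = scaled_newton_update([1.0, 0.5], [-2.0, 0.1])
```

Note the failure mode the abstract describes: if `scale` becomes tiny and the solver declares convergence because the accepted update is small, the solution can carry large numerical error.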
Anchordoqui, Luis A.; Goldberg, Haim; Huang, Xing; Vlcek, Brian J.
2014-06-17
The tensor-to-scalar ratio (r = 0.20 +0.07/−0.05) inferred from the excess B-mode power observed by the Background Imaging of Cosmic Extragalactic Polarization (BICEP2) experiment is almost twice as large as the 95% CL upper limits derived from temperature measurements of the WMAP (r < 0.13) and Planck (r < 0.11) space missions. Very recently, it was suggested that additional relativistic degrees of freedom beyond the three active neutrinos and photons can help to relieve this tension: the data favor an effective number of light neutrino species N_eff = 3.86 ± 0.25. Since the BICEP2 ratio implies the energy scale of inflation (V*^(1/4) ≈ 2×10^16 GeV) is comparable to the grand unification scale, in this paper we investigate whether we can accommodate the required N_eff with three right-handed (partners of the left-handed standard model) neutrinos living in the fundamental representation of a grand unified exceptional E6 group. We show that the superweak interactions of these Dirac states (through their coupling to a TeV-scale Z′ gauge boson) lead to decoupling of the right-handed neutrinos just above the QCD crossover transition: 175 MeV ≲ T_dec(ν_R) ≲ 250 MeV. For decoupling in this transition region, the contribution of the three right-handed neutrinos to N_eff is suppressed by heating of the left-handed neutrinos (and photons). Consistency (within 1σ) with the favored N_eff is achieved for 4.5 TeV
Generalized REGression Package for Nonlinear Parameter Estimation
Energy Science and Technology Software Center (OSTI)
1995-05-15
GREG computes modal (maximum-posterior-density) and interval estimates of the parameters in a user-provided Fortran subroutine MODEL, using a user-provided vector OBS of single-response observations or matrix OBS of multiresponse observations. GREG can also select the optimal next experiment from a menu of simulated candidates, so as to minimize the volume of the parametric inference region based on the resulting augmented data set.
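As a rough illustration of the kind of estimation GREG performs (with a user-supplied model function standing in for the Fortran MODEL subroutine), a minimal one-parameter Gauss-Newton fit might look like the sketch below; the data, model, and names are all made up for illustration:

```python
import math

def gauss_newton_1p(xs, ys, model, dmodel, k0, iters=20):
    """One-parameter Gauss-Newton: minimize sum (y - model(x, k))^2.
    model/dmodel are user-supplied, playing the role of GREG's MODEL."""
    k = k0
    for _ in range(iters):
        r = [y - model(x, k) for x, y in zip(xs, ys)]   # residuals
        j = [dmodel(x, k) for x in xs]                  # Jacobian column
        jtj = sum(ji * ji for ji in j)
        jtr = sum(ji * ri for ji, ri in zip(j, r))
        k += jtr / jtj                                  # normal-equation step
    return k

xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.exp(0.7 * x) for x in xs]        # synthetic data, true k = 0.7
k = gauss_newton_1p(xs, ys,
                    lambda x, k: math.exp(k * x),
                    lambda x, k: x * math.exp(k * x),
                    k0=0.5)
```

GREG itself goes further (multiresponse data, interval estimates, and optimal experiment selection), but the inner modal-estimation step is of this general shape.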
Interpolations of nuclide-specific scattering kernels generated with Serpent
Scopatz, A.; Schneider, E.
2012-07-01
The neutron group-to-group scattering cross section is an essential input parameter for any multi-energy-group physics model. However, if the analyst prefers to use Monte Carlo transport to generate group constants, this data is difficult to obtain for a single species of a material. Here, the Monte Carlo code Serpent was modified to return the group transfer probabilities on a per-nuclide basis. This ability is demonstrated in conjunction with an essential physics reactor model where cross section perturbations are used to dynamically generate reactor-state-dependent group constants via interpolation from pre-computed libraries. The modified version of Serpent was then verified with three interpolation cases designed to test the resilience of the interpolation scheme to changes in intra-group fluxes. For most species, interpolation resulted in errors of less than 5% of transport-computed values. For important scatterers, such as ¹H, errors of less than 2% were observed. For nuclides with high errors (>10%), the scattering channel typically had only a small probability of occurring.
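The interpolation step can be sketched as per-entry linear interpolation between two pre-computed libraries; the matrices and the state variable here are invented for illustration and are not Serpent output:

```python
def interp_scatter_matrix(t, t0, m0, t1, m1):
    """Linearly interpolate each group-to-group transfer entry between
    two pre-computed libraries, tabulated at states t0 and t1."""
    w = (t - t0) / (t1 - t0)
    return [[(1 - w) * a + w * b for a, b in zip(r0, r1)]
            for r0, r1 in zip(m0, m1)]

m0 = [[0.90, 0.10], [0.00, 1.00]]   # hypothetical library at T = 600 K
m1 = [[0.80, 0.20], [0.02, 0.98]]   # hypothetical library at T = 900 K
m = interp_scatter_matrix(750.0, 600.0, m0, 900.0, m1)
```

The verification cases in the paper probe exactly where such a scheme breaks down: when the intra-group flux shape changes between the tabulated states.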
Simulation Problem Analysis and Research Kernel | Open Energy...
Perturbation kernels for generalized seismological data functionals (GSDF)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Federal Fleet Manager Survey, produced for the U.S. Department of Energy (DOE) by the National Renewable Energy Laboratory (NREL), a U.S. DOE national laboratory. Perspectives on AFVs. Table of Contents: Introduction; Survey Development, Implementation, and Data Analysis.
Kernel Integration Code System--Multigroup Gamma-Ray Scattering.
Energy Science and Technology Software Center (OSTI)
1988-02-15
GGG (G3) is the generic designation for a series of computer programs that enable the user to estimate gamma-ray scattering from a point source to a series of point detectors. Program output includes detector response due to each source energy, as well as a grouping by scattered energy in addition to a simple, unscattered beam result. Although G3 is basically a single-scatter program, it also includes a correction for multiple scattering by applying a buildup factor for the path segment between the point of scatter and the detector point. Results are recorded with and without the buildup factor. Surfaces, defined by quadratic equations, are used to provide for a full three-dimensional description of the physical geometry. G3 evaluates scattering effects in those situations where more exact techniques are not economical. G3 was revised by Bettis and the name was changed to indicate that it was no longer identical to the G3 program. The name S3 was chosen since the scattering calculation has three steps: calculation of the flux arriving at the scatterer from the point source, calculation of the differential scattering cross section, and calculation of the scattered flux arriving at the detector.
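The three-step calculation can be sketched as a minimal single-scatter point-kernel estimate (without the buildup-factor correction, and with the differential cross section supplied as a plain number; all parameter values are illustrative, not G3/S3 code):

```python
import math

def single_scatter(S, src, scat, det, mu, n_e, dsigma_domega, dV):
    """One scatter path: source -> scatterer -> detector.
    S: source strength; mu: attenuation coefficient (1/cm);
    n_e: scatterer density at the scatter point; dsigma_domega:
    differential scattering cross section for this geometry;
    dV: volume of the scattering cell."""
    r1 = math.dist(src, scat)
    r2 = math.dist(scat, det)
    # step 1: uncollided flux arriving at the scatterer
    phi1 = S * math.exp(-mu * r1) / (4 * math.pi * r1 ** 2)
    # step 2: scattering source = flux * density * cross section * volume
    s_scat = phi1 * n_e * dsigma_domega * dV
    # step 3: attenuated flux from the scatter point to the detector
    return s_scat * math.exp(-mu * r2) / (4 * math.pi * r2 ** 2)

flux = single_scatter(1.0, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                      (1.0, 1.0, 0.0), mu=0.1, n_e=1.0,
                      dsigma_domega=1.0, dV=1.0)
```

S3 sums contributions like this over a mesh of scatter points and multiplies each leg by a buildup factor when the multiple-scatter correction is enabled.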
Xu, T.T.; Sathaye, J.; Galitsky, C.
2010-09-30
Adoption of efficient end-use technologies is one of the key measures for reducing greenhouse gas (GHG) emissions. As energy programs and policies move toward carbon regulation, effectively analyzing and managing the costs associated with GHG reductions becomes extremely important for industry and policy makers around the world. Energy-climate (EC) models are often used for analyzing the costs of reducing GHG emissions (e.g., carbon emissions) for various emission-reduction measures, because an accurate estimation of these costs is critical for identifying and choosing optimal emission-reduction measures, and for developing related policy options to accelerate market adoption and technology implementation. However, the accuracy of assessing GHG-emission reduction costs that account for the adoption of energy efficiency technologies depends on how well these end-use technologies are represented in integrated assessment models (IAM) and other energy-climate models. In this report, we first conduct a brief overview of different representations of end-use technologies (mitigation measures) in various energy-climate models, followed by problem statements and a description of the basic concepts of quantifying the cost of conserved energy, including integrating non-regrets options. A non-regrets option is defined as a GHG reduction option that is cost effective without considering its additional benefits related to reducing GHG emissions. Based upon these, we develop information on the costs of mitigation measures and technological change. These serve as the basis for collating the data on energy savings and costs for their future use in integrated assessment models. In addition to descriptions of the iron and steel making processes and the mitigation measures identified in this study, the report includes tabulated databases on costs of measure implementation, energy savings, carbon-emission reduction, and lifetimes. The cost curve data on mitigation
Babcock, Kerry; Sidhu, Narinder
2010-02-15
Purpose: Due to limitations in computer memory and computation time, typical radiation therapy treatments are calculated with a voxel dimension on the order of several millimeters. The anatomy below this practical resolution is approximated as a homogeneous region uniform in atomic composition and density. The purpose of this article is to examine whether the exclusion of anatomic structure below the practical dose calculation resolution produces deviations in the resulting dose distributions. Methods: EGSnrc-calculated dose distributions from the BRANCH lung model of Part I are compared and contrasted to dose distributions from a CT representation of the same BRANCH model for three different phases of the respiration cycle. Results: The exclusion of branching structures below a CT resolution of 1×1×2 mm³ resulted in a deviation in dose. The deviation in dose was as high as 14% but was localized around the branching structures. There was no significant variation in the dose deviation as a function of either field size or lung density. Conclusions: The exclusion of explicit branching structures of the lung in a CT representation creates localized deviations in dose. To ensure accurate dose calculations, CT resolution must be increased.
Energy Science and Technology Software Center (OSTI)
2002-07-15
SNL-ptc2acis translates Pro/Engineer descriptions of parts, assemblies, and cross-sections to ACIS representation. It is developed using Pro/Toolkit and the ACIS kernel. As such, it requires a Pro/Engineer license in order to execute, but is not subject to the issues of file encryption as a direct file reader would be.
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
148 L-2 FAR 52.215-1 INSTRUCTIONS TO OFFERORS -- COMPETITIVE ACQUISITION (JAN 2004) 150 L-3 FAR 52.216-1 TYPE OF CONTRACT (APR 1984) … 155 L-4 FAR 52.222-24 PREAWARD ON-SITE EQUAL OPPORTUNITY COMPLIANCE EVALUATION (FEB 1999) … 155 L-5 FAR 52.233-2 SERVICE OF
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
… 1 L-2 FAR 52.215-1 INSTRUCTIONS TO OFFERORS -- COMPETITIVE ACQUISITION (JAN 2004) … 3 L-3 FAR 52.216-1 TYPE OF CONTRACT (APR 1984) … 8 L-4 FAR 52.222-24 PREAWARD ON-SITE EQUAL OPPORTUNITY COMPLIANCE EVALUATION (FEB 1999) … 8 L-5 FAR 52.233-2
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
... 33.101 of the Federal Acquisition Regulation, that are filed directly with an ... solicitation of any Federal Acquisition Regulation (48 CFR Chapter 1) provision with an ...
A situated knowledge representation of geographical information
Gahegan, Mark N.; Pike, William A.
2006-11-01
In this paper we present an approach to conceiving of, constructing, and comparing the concepts developed and used by geographers, environmental scientists, and other earth science researchers to help describe, analyze, and ultimately understand their subject of study. Our approach is informed by the situations under which concepts are conceived and applied; it captures details of their construction, use, and evolution; and it supports their sharing, along with the means for deep exploration of conceptual similarities and differences that may arise among a distributed network of researchers. The intent here is to support the different perspectives onto GIS resources that researchers may legitimately take, and to capture and compute with aspects of epistemology, to complement the ontologies that are currently receiving much attention in the GIScience community.
A representation for efficient temporal reasoning
Delgrande, J.P.; Gupta, A.
1996-12-31
It has been observed that the temporal reasoning component in a knowledge-based system is frequently a bottleneck. We investigate here a class of graphs appropriate for an interesting class of temporal domains and for which very efficient reasoning algorithms are obtained: series-parallel graphs. These graphs can be used, for example, to model process execution, as well as various planning or scheduling activities. Events are represented by nodes of a graph and relationships are represented by edges labeled by ≤ or <. Graphs are composed using a sequence of series and parallel steps (recursively) on series-parallel graphs. We show that there is an O(n)-time preprocessing algorithm that allows us to answer queries about the events in O(1) time. Our results make use of a novel embedding of the graphs on the plane that is of independent interest. Finally, we argue that these results may be incorporated in general graphs representing temporal events by extending the approach of Gerevini and Schubert.
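One standard way to get O(n) preprocessing and O(1) queries for a series-parallel order (not necessarily the planar-embedding construction the authors use) is a two-dimensional realizer: build two linear extensions whose intersection is exactly the partial order, then answer a precedence query by comparing positions in both. A sketch, with an illustrative expression-tree encoding:

```python
# An SP expression is a leaf event name, or ("S", left, right) for
# series composition / ("P", left, right) for parallel composition.

def linear_extensions(expr):
    """Return two linear extensions whose intersection is the SP order."""
    if isinstance(expr, str):                 # a single event
        return [expr], [expr]
    op, left, right = expr
    l1, l2 = linear_extensions(left)
    r1, r2 = linear_extensions(right)
    if op == "S":                             # series: all of left precedes right
        return l1 + r1, l2 + r2
    return l1 + r1, r2 + l2                   # parallel: swap in 2nd extension

def build_index(expr):
    e1, e2 = linear_extensions(expr)
    return ({v: i for i, v in enumerate(e1)},
            {v: i for i, v in enumerate(e2)})

def precedes(p1, p2, u, v):
    """O(1) query: does u strictly precede v in the partial order?"""
    return p1[u] < p1[v] and p2[u] < p2[v]

# ((a ; b) parallel c) ; d  --  a before b, c unordered w.r.t. a and b,
# d after everything
expr = ("S", ("P", ("S", "a", "b"), "c"), "d")
p1, p2 = build_index(expr)
assert precedes(p1, p2, "a", "b")
assert precedes(p1, p2, "a", "d") and precedes(p1, p2, "c", "d")
assert not precedes(p1, p2, "a", "c") and not precedes(p1, p2, "c", "a")
```

This works because series-parallel partial orders have order dimension at most two: in the parallel case the two extensions order the halves oppositely, so cross-half pairs come out incomparable.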
Exploiting data representation for fault tolerance
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; Mueller, F.
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
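The bit-flip fault model can be illustrated directly on an IEEE 754 double (a sketch, not the authors' code): a flip in the low mantissa perturbs the value by about one ulp, while a flip in the exponent field rescales it by a power of two.

```python
import struct

def flip_bit(x, k):
    """Return x with bit k of its IEEE 754 binary64 representation
    flipped (bit 0 = least significant mantissa bit, bit 63 = sign)."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << k)))
    return y

tiny = abs(flip_bit(1.0, 0) - 1.0)   # mantissa lsb: ~2.2e-16
halved = flip_bit(1.0, 52)           # lowest exponent bit: 1.0 -> 0.5
```

This is the dichotomy the paper exploits: after normalization, a single flip either leaves the dot product nearly unchanged or produces an error so large that it is easy to detect.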
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
... (2) The offeror shall enter, in the block with its name ... If this solicitation is amended, all terms and conditions ... To facilitate the Government's search for key words during ...
Method for contour extraction for object representation
Skourikhine, Alexei N.; Prasad, Lakshman
2005-08-30
Contours are extracted to represent a pixelated object in a background pixel field. An object pixel is located that is the start of a new contour for the object, and that pixel is identified as the first pixel of the new contour. A first contour point is then located on the mid-point of a transition edge of the first pixel. A tracing direction from the first contour point is determined for tracing the new contour. Contour points on mid-points of pixel transition edges are sequentially located along the tracing direction until the first contour point is again encountered, completing the trace of the new contour. The new contour is then added to a list of extracted contours that represent the object. The contour extraction process associates regions and contours by labeling all the contours belonging to the same object with the same label.
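The placement of contour points on edge mid-points can be sketched as below; this only collects the mid-points of transition edges, and the ordering/tracing step described above is omitted. The representation (a list of lists of 0/1) is illustrative:

```python
def boundary_midpoints(mask):
    """Collect mid-points of pixel transition edges (object/background
    borders) for a binary mask. Points are (row, col) with half-integer
    coordinates on the edge being crossed."""
    rows, cols = len(mask), len(mask[0])

    def bg(r, c):                    # outside the image counts as background
        return r < 0 or r >= rows or c < 0 or c >= cols or mask[r][c] == 0

    pts = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                if bg(r - 1, c): pts.append((r, c + 0.5))       # top edge
                if bg(r + 1, c): pts.append((r + 1, c + 0.5))   # bottom edge
                if bg(r, c - 1): pts.append((r + 0.5, c))       # left edge
                if bg(r, c + 1): pts.append((r + 0.5, c + 1))   # right edge
    return pts

pts = boundary_midpoints([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]])   # a plus-shaped object
```

Chaining these points into closed loops, and labeling loops by object, is the tracing and labeling stage of the patented method.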
explicit representation of uncertainty in system load
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Statistical representation of clouds in climate models
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
An approach for representing ice microphysics in bin and bulk schemes: Application to TWP-ICE deep convection. Hugh Morrison and Wojciech Grabowski, National Center for Atmospheric Research. ARM STM, Monday, April 1, 2009. (1) Uncertainty of ice initiation processes; (2) wide range of ice particle characteristics (e.g., shape, effective density); (3) no clear separation of physical processes for small and large crystals. The treatment of ice microphysics has a large impact on model simulations, e.g.,
PART IV-REPRESENTATIONS AND INSTRUCTIONS
National Nuclear Security Administration (NNSA)
... during performance, and through final payment of any contract, basic agreement, basic ...