National Library of Energy BETA

Sample records for automatic outage counts

  1. Outage Log

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Queue Look Scheduled Outages Outage Log Science Gateway Status Login Node Status ... It is a historical record and may not be updated while a system event is in progress. ...

  2. NERSC Scheduled System Outages

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scheduled System Outages NERSC Scheduled System Outages Last edited: 2016-04-29 11:35:00

  3. Improving Outage Performance: Outage Optimization Process

    SciTech Connect (OSTI)

    LaPlatney, Jere J.

    2006-07-01

Planned outage performance is a key measure of how well a Nuclear Power Plant (NPP) is operated. Performance during planned outages strongly affects virtually all of a plant's performance metrics. In recognition of this fact, NPP operators worldwide have focused, and continue to focus, on improving their outage performance. The process of improving outage performance is commonly referred to as 'Outage Optimization' in the industry. This paper starts with a summary of the principles of Outage Optimization. It then provides an overview of a process in common use in the USA and elsewhere to manage the improvement of planned outages. The program described is comprehensive in that it involves managing improvement in both the Preparation and Execution phases of outage management. (author)

  4. Systems Outage Notification Policy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    the need for a maintenance event window no less than 24 hours in advance of the outage (emergency fixes). Users will be notified of regularly scheduled maintenance in advance, on ...

  5. Shopping for outage management systems

    SciTech Connect (OSTI)

    Chou, Y.C.; Konneker, L.K.; Watkins, T.R.

    1995-12-31

Customer service is becoming increasingly important to electric utilities. Outage management is an important part of customer service. Good outage management means quickly responding to outages and keeping customers informed about them. Each outage equals lost customer satisfaction and lost revenue. Outage management is increasingly important because of new competition among utilities for customers, pressure from regulators, and internal pressure to cut costs. The market has several existing software products for outage management. How does a utility judge whether these products satisfy its specific needs? Technology is changing rapidly to support outage management. Which technology is proven and cost-effective? The purpose of this paper is to outline the procedure for evaluating outage management systems and to discuss the key features to look for. It also gives our opinion of the features that represent the state of the art. This paper will not discuss specific products or list vendor names.

  6. Development of Improved Graphical Displays for an Advanced Outage Control Center, Employing Human Factors Principles for Outage Schedule Management

    SciTech Connect (OSTI)

    St Germain, Shawn Walter; Farris, Ronald Keith; Thomas, Kenneth David

    2015-09-01

The long-term viability of existing nuclear power plants in the United States (U.S.) is dependent upon a number of factors, including maintaining high capacity factors, maintaining nuclear safety, and reducing operating costs, particularly those associated with refueling outages. Refueling outages typically take 20-30 days, and for existing light water NPPs in the U.S., the reactor cannot be in operation during the outage. Furthermore, given that many NPPs generate between $1-1.5 million/day in revenue when in operation, there is considerable interest in shortening the length of refueling outages. Yet refueling outages are highly complex operations, involving multiple concurrent and dependent activities that are challenging to coordinate; therefore, finding ways to improve refueling outage performance while maintaining nuclear safety has proven to be difficult. The Advanced Outage Control Center (AOCC) project is a research and development (R&D) demonstration activity under the LWRS Program. LWRS is an R&D program that works closely with industry R&D programs to establish technical foundations for the licensing and managing of long-term, safe, and economical operation of the current fleet of NPPs. As such, the LWRS Advanced Outage Control Center project has the goal of improving the management of commercial NPP refueling outages. To accomplish this goal, INL is developing an advanced outage control center (OCC) that is specifically designed to maximize the usefulness of communication and collaboration technologies for outage coordination and problem resolution activities. The overall focus is on developing an AOCC with capabilities that enable plant and OCC staff to: collaborate in real time to address emergent issues; effectively communicate outage status to all workers involved in the outage; effectively communicate discovered conditions in the field to the OCC; provide real-time work status; and provide automatic pending support notifications.
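The revenue figures quoted in the abstract make the incentive to shorten outages easy to quantify. A back-of-envelope sketch using the abstract's numbers (the helper function is ours, not from the report):

```python
def forgone_revenue(revenue_m_per_day, days):
    """Generation revenue (in $M) forgone while the reactor is offline."""
    return revenue_m_per_day * days

# bounds from the abstract: $1-1.5M/day over a 20-30 day refueling outage
print(forgone_revenue(1.0, 20))  # 20.0  ($20M at the low end)
print(forgone_revenue(1.5, 30))  # 45.0  ($45M at the high end)
```

Even a single day shaved from an outage is worth on the order of a million dollars, which is the economic argument behind the AOCC work.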

  7. Track NERSC Outages in Google Calendar

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Track NERSC Outages in Google Calendar March 22, 2013 by Jack Deslippe Outages are now available in Google calendar form. You can subscribe to this...

  8. OutageMapURL Phases Energy Services

    Open Energy Info (EERE)

    OutageMapURL Phases Energy Services County Electric Power Assn http outages county org A N Electric Coop Virginia AEP Generating Company https www aepaccount com zipr...

  9. outages | OpenEI Community

    Open Energy Info (EERE)

outages Submitted by Graham7781, Super contributor, 29 October, 2012 - 14:46 East Coast Utilities prepare for Hurricane Sandy East Coast Hurricane...

  10. August 14, 2003 Power Outages … Announcement

    Broader source: Energy.gov (indexed) [DOE]

Ellen P. Vancko evancko@nerc.com Power Outage Update 8/16/2003 11 a.m. EDT The bulk ... will continue to experience rotating outages due to generating capacity availability. ...

  11. North American Electric Reliability Council Outage Announcement...

    Broader source: Energy.gov (indexed) [DOE]

    Recommendations Blackout 2003: Blackout Final Implementation Report U.S. - Canada Power System Outage Task Force: Final Report on the Implementation of Task Force Recommendations

  12. RESOLVED: Projectb filesystem outage July 9, 2012

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

filesystem outage July 9, 2012 The projectb filesystem had a hardware failure that potentially generated I/O errors. The filesystem logs indicate that the...

  13. Property:OutagePhoneNumber | Open Energy Information

    Open Energy Info (EERE)

    OutagePhoneNumber Jump to: navigation, search Property Name OutagePhoneNumber Property Type String Description An outage hotline or 24-hour customer service number Note: uses...

  14. GUIDELINES FOR IMPLEMENTATION OF AN ADVANCED OUTAGE CONTROL CENTER TO IMPROVE OUTAGE COORDINATION, PROBLEM RESOLUTION, AND OUTAGE RISK MANAGEMENT

    SciTech Connect (OSTI)

    Germain, Shawn St; Farris, Ronald; Whaley, April M; Medema, Heather; Gertman, David

    2014-09-01

This research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, a research and development (R&D) program sponsored by the Department of Energy (DOE) and performed in close collaboration with industry R&D programs that provide the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. The LWRS program serves to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. Managing NPP outages is a complex and difficult task due to the large number of maintenance and repair activities that are accomplished in a short period of time. During an outage, the outage control center (OCC) is the temporary command center for outage managers and provides several critical functions for the successful execution of the outage schedule. Essentially, the OCC functions to facilitate information inflow, assist outage management in processing information, and facilitate the dissemination of information to stakeholders. Currently, outage management activities rely primarily on telephone communication, face-to-face status reports, and periodic briefings in the OCC. It is difficult to keep information on outage progress and discovered conditions current. Several advanced communication and collaboration technologies have shown promise for facilitating the information flow into, across, and out of the OCC. The use of these technologies will allow information to be shared electronically, providing greater amounts of real-time information to the decision makers and allowing OCC coordinators to meet with supporting staff remotely. Passively monitoring status electronically through advances in the areas of mobile worker technologies, computer-based procedures, and automated work packages will reduce the current reliance on manually

  15. Managing turbine-generator outages by computer

    SciTech Connect (OSTI)

Reinhart, E.R. [Reinhart and Associates, Inc., Austin, TX (United States)]

    1997-09-01

    This article describes software being developed to address the need for computerized planning and documentation programs that can help manage outages. Downsized power-utility companies and the growing demand for independent, competitive engineering and maintenance services have created a need for a computer-assisted planning and technical-direction program for turbine-generator outages. To meet this need, a software tool is now under development that can run on a desktop or laptop personal computer to assist utility personnel and technical directors in outage planning. Total Outage Planning Software (TOPS), which runs on Windows, takes advantage of the mass data storage available with compact-disc technology by archiving the complete outage documentation on CD. Previous outage records can then be indexed, searched, and viewed on a computer with the click of a mouse. Critical-path schedules, parts lists, parts order tracking, work instructions and procedures, custom data sheets, and progress reports can be generated by computer on-site during an outage.
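Critical-path schedules of the kind TOPS generates can be computed with the classic longest-path recursion over the task dependency graph. TOPS itself is proprietary and only described at a high level in the abstract; this is a minimal sketch of the underlying computation, with invented outage tasks and durations:

```python
def critical_path(tasks):
    """tasks: {name: (duration, [predecessor names])} -> (length, path)."""
    memo = {}

    def earliest_finish(name):
        # longest cumulative duration from any start task through `name`
        if name not in memo:
            dur, preds = tasks[name]
            memo[name] = dur + max((earliest_finish(p) for p in preds), default=0)
        return memo[name]

    end = max(tasks, key=earliest_finish)
    # walk back along the predecessor that determined each earliest finish
    path, node = [end], end
    while tasks[node][1]:
        node = max(tasks[node][1], key=earliest_finish)
        path.append(node)
    return memo[end], list(reversed(path))

# invented turbine-outage tasks: (duration in days, predecessors)
outage = {
    "disassemble":   (5, []),
    "inspect_rotor": (7, ["disassemble"]),
    "order_parts":  (10, ["disassemble"]),
    "repair":        (4, ["inspect_rotor", "order_parts"]),
    "reassemble":    (6, ["repair"]),
}
print(critical_path(outage))  # (25, ['disassemble', 'order_parts', 'repair', 'reassemble'])
```

Here parts procurement, not rotor inspection, sits on the critical path, so only delays on that chain extend the outage.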

  16. Property:OutageMapURL | Open Energy Information

    Open Energy Info (EERE)

    + Agralite Electric Coop + https:pyxis-oms.comOutageMapAgraliteOutageMap.html + Alfalfa Electric Coop, Inc + https:ebill.alfalfaelectric.comwoViewermapviewer.html?config...

  17. Homeowners: Respond to Power Outages | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Guidelines Homeowners: Respond to Power Outages Homeowners: Respond to Power Outages ... Learn more Certify your electrical systems-If your house sustains flood or wind damage ...

  18. A Review of Power Outages and Restoration Following the June...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

A Review of Power Outages and Restoration Following the June 2012 Derecho This report reviews power ...

  19. North American Electric Reliability Council Power Outage Update...

    Office of Environmental Management (EM)

    will continue to experience rotating outages due to generating capacity availability. North American Electric Reliability Council Power Outage Update (48.2 KB) More Documents & ...

  20. Track NERSC Outages in Google Calendar

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Track NERSC Outages in Google Calendar March 22, 2013 by Jack Deslippe Outages are now available in Google calendar form. You can subscribe to this calendar by following the link, http://goo.gl/A4n3k, and then clicking the add button on the bottom right. If you find any issues with the calendar content, please contact NERSC consultants by email at consult(at)nersc.gov.

  1. Potomac River Project Outage Schedule Clarification | Department...

    Office of Environmental Management (EM)

Re: Potomac River Generating Station Department of Energy, Case No. EO-05-01: Potomac Electric Power Company (PEPCO) revised plan for transmission outages for the 230 kV circuits ...

  2. Advanced Test Reactor outage risk assessment

    SciTech Connect (OSTI)

    Thatcher, T.A.; Atkinson, S.A.

    1997-12-31

Beginning in 1997, risk assessment was performed for each Advanced Test Reactor (ATR) outage, aiding the coordination of plant configuration and work activities (maintenance, construction projects, etc.) to minimize the risk of reactor fuel damage and to improve defense-in-depth. The risk assessment activities move beyond simply meeting Technical Safety Requirements to increase awareness of risk-sensitive configurations, to focus increased attention on the higher-risk activities, and to seek cost-effective design or operational changes that reduce risk. A detailed probabilistic risk assessment (PRA) had been performed to assess the risk of fuel damage during shutdown operations, including heavy load handling. This resulted in several design changes to improve safety; however, evaluation of individual outages had not been performed previously, and many risk insights were not being utilized in outage planning. The shutdown PRA provided the necessary framework for assessing relative and absolute risk levels and assessing defense-in-depth. Guidelines were written identifying combinations of equipment outages to avoid. Screening criteria were developed for the selection of work activities to receive review. Tabulation of inherent and work-related initiating events and their relative risk level versus plant mode has aided identification of the risk level the scheduled work involves. Pre-outage reviews are conducted, and post-outage risk assessment is documented to summarize the positive and negative aspects of the outage with regard to risk. The risk for the outage is compared to the risk level that would result from optimal scheduling of the work to be performed and to baseline or average past performance.
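Guidelines that identify "combinations of equipment outages to avoid" lend themselves to a simple configuration screen. A hypothetical sketch of that idea, with invented equipment names and forbidden combinations (not the ATR's actual guidelines):

```python
# hypothetical forbidden combinations, not the ATR's actual guidelines
FORBIDDEN_COMBINATIONS = [
    {"primary_pump_A", "primary_pump_B"},  # loss of forced primary cooling
    {"diesel_gen_1", "offsite_feed"},      # loss of backup electrical power
]

def screen_window(out_of_service):
    """Return every forbidden combination fully contained in the work window."""
    oos = set(out_of_service)
    return [combo for combo in FORBIDDEN_COMBINATIONS if combo <= oos]

# both primary pumps scheduled out together trips the first rule
print(screen_window({"primary_pump_A", "primary_pump_B", "crane"}))
```

A screen like this catches risk-sensitive configurations at scheduling time, before the outage plan is frozen.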

  3. Outage project productivity improvement of TVA fossil

    SciTech Connect (OSTI)

    Picard, H.E.; Seay, C.R. Jr.

    1996-10-01

    Competition in the utility industry forces management to look closely at the cost effectiveness of power plant outage projects. At TVA Fossil and Hydro Power, innovative work measurement is proving effective as a project management tool to do more with less. Labor-hours to complete outage work scopes are reduced by some 20 to 30%, not by working harder or sacrificing safety, or quality, but by working and managing smarter. Fossil power plant outages and shutdowns are costly. They are labor-intensive construction projects, often with expanding work scope, and executed on a fast track. Outage work is inherently complex and dynamic, and often unpredictable. Many activities and tasks must be integrated, coordinated and completed safely and efficiently by multiple crafts and work groups. As a result, numerous productivity factors can influence the cost and schedule of outage completion. This provides owners, contractors and labor with unique opportunities for competitive advantage--by making radical changes in how they manage labor-hours and time.

  4. Contingency Analysis of Cascading Line Outage Events

    SciTech Connect (OSTI)

    Thomas L Baldwin; Magdy S Tawfik; Miles McQueen

    2011-03-01

    As the US power systems continue to increase in size and complexity, including the growth of smart grids, larger blackouts due to cascading outages become more likely. Grid congestion is often associated with a cascading collapse leading to a major blackout. Such a collapse is characterized by a self-sustaining sequence of line outages followed by a topology breakup of the network. This paper addresses the implementation and testing of a process for N-k contingency analysis and sequential cascading outage simulation in order to identify potential cascading modes. A modeling approach described in this paper offers a unique capability to identify initiating events that may lead to cascading outages. It predicts the development of cascading events by identifying and visualizing potential cascading tiers. The proposed approach was implemented using a 328-bus simplified SERC power system network. The results of the study indicate that initiating events and possible cascading chains may be identified, ranked and visualized. This approach may be used to improve the reliability of a transmission grid and reduce its vulnerability to cascading outages.
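The tiered cascade idea in this abstract can be illustrated with a toy model. This is not the paper's method (which runs N-k contingency analysis on a 328-bus simplified SERC network); it is a minimal sketch with an invented even-redistribution rule and made-up line data:

```python
# Toy tiered cascading-outage simulation: trip a line, redistribute its
# flow evenly over survivors, trip anything now over capacity, repeat.
def simulate_cascade(lines, initiating):
    """lines: {name: {"flow": MW, "cap": MW}}; returns list of outage tiers."""
    lines = {k: dict(v) for k, v in lines.items()}  # don't mutate caller's data
    tiers, tripped = [[initiating]], {initiating}
    while True:
        shed = sum(lines[l]["flow"] for l in tiers[-1])  # flow to redistribute
        for l in tiers[-1]:
            lines[l]["flow"] = 0.0
        survivors = [l for l in lines if l not in tripped]
        if not survivors:
            break
        for l in survivors:  # invented rule: spread shed flow evenly
            lines[l]["flow"] += shed / len(survivors)
        next_tier = [l for l in survivors if lines[l]["flow"] > lines[l]["cap"]]
        if not next_tier:
            break
        tiers.append(next_tier)
        tripped.update(next_tier)
    return tiers

lines = {"A": {"flow": 90.0, "cap": 100.0},
         "B": {"flow": 50.0, "cap": 100.0},
         "C": {"flow": 95.0, "cap": 100.0}}
print(simulate_cascade(lines, "A"))  # tripping A overloads C, then B
```

The nested tiers are exactly what the paper visualizes: an initiating event followed by successive rings of overload trips. A real analysis would replace the even-redistribution rule with a DC or AC power-flow solution.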

  5. Outage management and health physics issue, 2007

    SciTech Connect (OSTI)

    Agnihotri, Newal

    2007-05-15

The focus of the May-June issue is on outage management and health physics. Major articles/reports in this issue include: India: a potential commercial opportunity, a U.S. Department of Commerce report, by Joe Neuhoff and Justin Rathke; The changing climate for nuclear energy, by Skip Bowman, Nuclear Energy Institute; Selecting protective clothing, by J. Mark Price, Southern California Edison; and Successful refurbishment outage, by Sudesh K. Gambhir, Omaha Public Power District. Industry innovation articles in this issue are: Containment radiation monitoring spiking, by Michael W. Lantz and Robert Routolo, Arizona Public Service Company; Improved outage performance, by Michael Powell and Troy Wilfong, Arizona Public Service Company, Palo Verde Nuclear Generating Station; Stop repacking valves and achieve leak-free performance, by Kenneth Hart, PPL Susquehanna LLC; and Head assembly upgrade package, by Timothy Petit, Dominion Nuclear.

6. Outage management and health physics issue, 2008

    SciTech Connect (OSTI)

    Agnihotri, Newal

    2008-05-15

    The focus of the May-June issue is on outage management and health physics. Major articles include: Outage optimization initiatives, by George B. Beam, AREVA NP, Inc.; New plant based on excellent track records, by Jim Scarola, Progress Energy; Meeting customer needs and providing environmental benefits, by Peter S. Hastings, Duke Energy; Plants with 3-D design, by Jack A. Bailey, Tennessee Valley Authority; and Highest quality with exceptional planning, by Jason A. Walls, Duke Energy. Industry innovation articles include: Integrated exposure reduction plan, by Ed Wolfe, Exelon; Performance-based radiation worker training, by Joe Giuffre and Timothy Vriezerma, American Electric Power.

  7. RESOLVED: Projectb filesystem outage July 9, 2012

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

RESOLVED: Projectb filesystem outage July 9, 2012 The projectb filesystem had a hardware failure that potentially generated I/O errors. The filesystem logs indicate that the earliest abnormal event on the filesystem occurred at 9:19AM and the filesystem was taken down for maintenance at 10:42AM. The filesystem returned to service at 11:20AM. Jobs running on the cluster would not have been able to read from or write to the projectb

  8. Refinery Outages: First-Half 2016

    U.S. Energy Information Administration (EIA) Indexed Site

    Outages: First-Half 2016 March 2016 Independent Statistics & Analysis www.eia.gov U.S. Department of Energy Washington, DC 20585 U.S. Energy Information Administration | Refinery Outages: First-Half 2016 i This report was prepared by the U.S. Energy Information Administration (EIA), the statistical and analytical agency within the U.S. Department of Energy. By law, EIA's data, analyses, and forecasts are independent of approval by any other officer or employee of the United States

  9. Benchmark Report on Key Outage Attributes: An Analysis of Outage Improvement Opportunities and Priorities

    SciTech Connect (OSTI)

    Germain, Shawn St.; Farris, Ronald

    2014-09-01

The Advanced Outage Control Center (AOCC) is a multi-year pilot project targeted at Nuclear Power Plant (NPP) outage improvement. The purpose of this pilot project is to improve management of NPP outages through the development of an AOCC that is specifically designed to maximize the usefulness of communication and collaboration technologies for outage coordination and problem resolution activities. This report documents the results of a benchmarking effort to evaluate the transferability of technologies demonstrated at Idaho National Laboratory and the primary pilot project partner, Palo Verde Nuclear Generating Station. The initial assumption for this pilot project was that NPPs generally do not take advantage of advanced technology to support outage management activities. Several researchers involved in this pilot project have commercial NPP experience and believed that very little technology has been applied toward outage communication and collaboration. To verify that the technology options researched and demonstrated through this pilot project would in fact have broad application for the US commercial nuclear fleet, and to look for additional outage management best practices, LWRS program researchers visited several additional nuclear facilities.

  10. AUTOMATIC COUNTER

    DOE Patents [OSTI]

    Robinson, H.P.

    1960-06-01

An automatic counter of alpha particle tracks recorded by a sensitive emulsion of a photographic plate is described. The counter includes a source of modulated dark-field illumination for developing light flashes from the recorded particle tracks as the photographic plate is automatically scanned in narrow strips. Photoelectric means convert the light flashes to proportional current pulses for application to an electronic counting circuit. Photoelectric means are further provided for developing a phase reference signal from the photographic plate in such a manner that signals arising from particle tracks not parallel to the edge of the plate are out of phase with the reference signal. The counting circuit includes provision for rejecting the out-of-phase signals resulting from unoriented tracks as well as signals resulting from spurious marks on the plate such as scratches, dust or grain clumpings, etc. The output of the circuit is hence indicative only of the tracks that would be counted by a human operator.
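The patent's phase-discrimination logic has a simple software analogue: count only pulses that arrive in step with the modulation reference and reject everything off-phase. A loose sketch of that idea (the timing values and gate width are invented, not from the patent):

```python
def count_tracks(pulse_times, ref_period, gate_width):
    """Count pulses arriving within +/- gate_width of the reference phase."""
    count = 0
    for t in pulse_times:
        phase = t % ref_period
        # in-phase pulses cluster near phase 0 (mod ref_period); anything
        # else (scratches, dust, grain clumps) is rejected as off-phase
        if phase <= gate_width or ref_period - phase <= gate_width:
            count += 1
    return count

# three in-phase track flashes plus one off-phase artifact at t = 2.5
print(count_tracks([0.01, 1.0, 1.98, 2.5], ref_period=1.0, gate_width=0.05))  # 3
```

The hardware does this with an analog phase comparator; the digital gate here is only meant to show why spurious marks drop out of the count.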

  11. Hopper compilers and DDT short outage next Wed, May 16

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Hopper compilers and DDT short outage next Wed, May 16 May 10, 2012 Due to a scheduled maintenance for the License Servers, most of...

  12. Outage management and health physics issue, 2009

    SciTech Connect (OSTI)

    Agnihotri, Newal

    2009-05-15

    The focus of the May-June issue is on outage management and health physics. Major articles include the following: Planning and scheduling to minimize refueling outage, by Pat McKenna, AmerenUE; Prioritizing safety, quality and schedule, by Tom Sharkey, Dominion; Benchmarking to high standards, by Margie Jepson, Energy Nuclear; Benchmarking against U.S. standards, by Magnox North, United Kingdom; Enabling suppliers for new build activity, by Marcus Harrington, GE Hitachi Nuclear Energy; Identifying, cultivating and qualifying suppliers, by Thomas E. Silva, AREVA NP; Creating new U.S. jobs, by Francois Martineau, Areva NP. Industry innovation articles include: MSL Acoustic source load reduction, by Amir Shahkarami, Exelon Nuclear; Dual Methodology NDE of CRDM nozzles, by Michael Stark, Dominion Nuclear; and Electronic circuit board testing, by James Amundsen, FirstEnergy Nuclear Operating Company. The plant profile article is titled The future is now, by Julia Milstead, Progress Energy Service Company, LLC.

  13. Outage management and health physics issue, 2006

    SciTech Connect (OSTI)

    Agnihotri, Newal

    2006-05-15

The focus of the May-June issue is on outage management and health physics. Major articles/reports in this issue include: A design with experience for the U.S., by Michael J. Wallace, Constellation Generation Group; Hope to be among the first, by Randy Hutchinson, Entergy Nuclear; Plans to file COLs in 2008, by Garry Miller, Progress Energy; Evolution of ICRP's recommendations, by Lars-Erik Holm, ICRP; European network on education and training in radiological protection, by Michele Coeck, SCK-CEN, Belgium; Outage management: an important tool for improving nuclear power plant performance, by Thomas Mazour and Jiri Mandula, IAEA, Austria; and Plant profile: Exploring new paths to excellence, by Anne Thomas, Exelon Nuclear.

  14. Advanced Outage and Control Center: Strategies for Nuclear Plant Outage Work Status Capabilities

    SciTech Connect (OSTI)

    Gregory Weatherby

    2012-05-01

The research effort is part of the Light Water Reactor Sustainability (LWRS) Program. LWRS is a research and development program sponsored by the Department of Energy, performed in close collaboration with industry to provide the technical foundations for licensing and managing the long-term, safe and economical operation of current nuclear power plants. The LWRS Program serves to help the US nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. The Outage Control Center (OCC) Pilot Project was directed at carrying out the applied research for development and piloting of technology designed to enhance safe outage and maintenance operations, improve human performance and reliability, increase overall operational efficiency, and improve plant status control. Plant outage management is a high-priority concern for the nuclear industry from cost and safety perspectives. Unfortunately, many of the underlying technologies supporting outage control are the same as those used in the 1980s. They depend heavily upon large teams of staff, multiple work and coordination locations, and manual administrative actions that require large amounts of paper. Previous work in human reliability analysis suggests that many repetitive tasks, including paperwork tasks, may have a failure rate of 1.0E-3 or higher (Gertman, 1996). With between 10,000 and 45,000 subtasks being performed during an outage (Gomes, 1996), the opportunity for human error of some consequence is a realistic concern. Although a number of factors exist that can make these errors recoverable, reducing and effectively coordinating the sheer number of tasks to be performed, particularly those that are error prone, has the potential to enhance outage efficiency and safety. Additionally, outage management requires precise coordination of work groups that do not always share similar objectives. Outage
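The abstract's error figures multiply out directly; a quick check of that arithmetic (the rate and subtask counts come from the cited sources, the helper name is ours):

```python
# figures from the abstract: 1.0E-3 per-subtask error rate (Gertman, 1996),
# 10,000-45,000 subtasks per outage (Gomes, 1996)
def expected_errors(rate_per_subtask, subtasks):
    return rate_per_subtask * subtasks

print(round(expected_errors(1.0e-3, 10_000)))  # 10
print(round(expected_errors(1.0e-3, 45_000)))  # 45
```

Roughly 10 to 45 expected errors per outage is why the report treats even partially recoverable human error as a first-order planning concern.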

  15. How individual traces and interactive timelines could support outage execution - Toward an outage historian concept

    SciTech Connect (OSTI)

    Parfouru, S.; De-Beler, N.

    2012-07-01

In the context of a project designing innovative ICT-based solutions for the organizational concept of outage management, we focus on the informational processes of the OCR (Outage Control Room) underlying the execution of outages. Informational processes are based on structured and unstructured documents that play a key role in the collaborative processes and management of the outage. We track these structured and unstructured documents, electronic or not, from creation to sharing. Our analysis suggests that the individual traces produced by a participant with a specific role could serve multiple purposes and support sharing between participants without duplicating work. The ultimate goal is to be able to generate an outage historian, not focused solely on highly structured information, that could improve the continuity of information between participants. We study the implementation of this approach through web technologies and social media tools. We also investigate data access through interactive visualization timelines coupled with other modalities to assist users in navigating and exploring the proposed historian. (authors)

  16. Application of Standard Maintenance Windows in PHWR Outage

    SciTech Connect (OSTI)

    Fuming Jiang

    2006-07-01

The concept of Standard Maintenance Windows has been widely used in the planned outages of light water reactors worldwide. However, due to the specific features of the Pressurized Heavy Water Reactor (PHWR), PHWR owners have not reached a consensus on adopting Standard Maintenance Windows for planned outages aimed at optimizing outage duration. Third Qinshan Nuclear Power Company (TQNPC), drawing on experience gained in previous outages and with reference to other PHWR power plants, has identified a set of Standard Maintenance Windows for planned outages. It can be applied to similar PHWR plants, with a few windows that are specific to Qinshan Phase III NPP. The use of these Standard Maintenance Windows in planned outages has proved effective in controlling shutdown nuclear safety, minimizing the unavailability of safety systems, improving the efficient utilization of outage duration, and improving the flexibility of the outage schedule when emergent issues force schedule revisions. It has also formed a solid foundation for benchmarking. The identification of Standard Maintenance Windows and their application are discussed with relevant cases for the common improvement of outage duration. (author)

  17. Plant maintenance and outage management issue, 2005

    SciTech Connect (OSTI)

    Agnihotri, Newal (ed.)

    2005-01-15

The focus of the January-February issue is on plant maintenance and outage management. Major articles/reports in this issue include: Dawn of a new era, by Joe Colvin, Nuclear Energy Institute (NEI); Plant profile: Beloyarsk NPP, Russia, by Nikolai Oshkanov, Beloyarsk NPP, Russia; Improving economic performance, by R. Spiegelberg-Planner, John De Mella, and Marius Condu, IAEA; A model for improving performance, by Pet Karns, MRO Software; ASME codes and standards, by Shannon Burke, ASME International; and Refurbishment programs, by Craig S. Irish, Nuclear Logistics, Inc.

  18. SAS Output

    U.S. Energy Information Administration (EIA) Indexed Site

    B. U.S. Transformer Outages by Type and NERC region, 2013 Outage Type Eastern Interconnection TRE WECC Contiguous U.S. Circuit Outage Counts Automatic Outages (Sustained) 59.00 --...

  19. SAS Output

    Gasoline and Diesel Fuel Update (EIA)

    A. U.S. Transmission Circuit Outages by Type and NERC region, 2013 Outage Type FRCC MRO NPCC RFC SERC SPP TRE WECC Contiguous U.S. Circuit Outage Counts Automatic Outages...
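The two EIA snippets above are fragments of tables that tally circuit outage counts by outage type and NERC region. A hypothetical sketch of that kind of tally (the records, regions, and counts are invented, not EIA data):

```python
from collections import Counter

# invented outage records, not EIA data
outage_log = [
    {"region": "TRE",  "type": "Automatic Outage (Sustained)"},
    {"region": "WECC", "type": "Automatic Outage (Sustained)"},
    {"region": "TRE",  "type": "Scheduled Outage"},
    {"region": "WECC", "type": "Automatic Outage (Momentary)"},
    {"region": "TRE",  "type": "Automatic Outage (Sustained)"},
]

# tally circuit outage counts by (NERC region, outage type)
counts = Counter((rec["region"], rec["type"]) for rec in outage_log)
print(counts[("TRE", "Automatic Outage (Sustained)")])  # 2
```

Grouping on a (region, type) key like this is all the table needs; a missing cell (as in the "--" entries above) corresponds to a key with no records.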

  20. Outlook for Refinery Outages and Available Refinery Capacity...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of gasoline and distillate, and to include a more detailed consideration of the impact of unexpected outages on product supplies. This report reviews the potential...

  1. Outlook for Refinery Outages and Available Refinery Capacity...

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    level of refinery outages outlined in this report. This report does not consider the impacts of refined product logistics and distribution, which could affect the movement of...

  2. Homeowners: Respond to Power Outages | Department of Energy

    Broader source: Energy.gov (indexed) [DOE]

    Power Outages After a disaster, electric utilities and government officials will first work to restore power to critical infrastructure like power plants and transmission lines, ...

  3. A Review of Power Outages and Restoration Following the June...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    August 2012 A Review of Power Outages and Restoration Following the June 2012 Derecho Infrastructure Security and Energy Restoration Office of Electricity Delivery and Energy ...

  4. Fermi 2: Independent safety assessment of refueling outage

    SciTech Connect (OSTI)

    Arora, H.O. [Detroit Edison, MI (United States)]

    1994-12-31

    Industry experience and studies conducted by the U.S. Nuclear Regulatory Commission (NRC) have shown that plants are susceptible to a variety of events that can challenge safety during shutdowns. While these events have neither posed nor indicated an undue risk to public health and safety, they do serve to underscore the importance of effective outage planning and control. The NUMARC 91-06 guidelines suggest that proper planning and execution of outage activities can reduce the likelihood and consequences of events, which ultimately enhances safety during shutdown. The Fermi 2 Independent Safety Engineering Group (ISEG) is charged with the independent safety review of the refueling outage plan and its implementation. The ISEG is responsible for performing a detailed and critical review of the proposed outage plan prior to the start of the outage, maintaining surveillance of the adequacy and consistency of the "defense-in-depth" provided during the outage, reviewing outage plan changes for potential vulnerabilities that could affect safety functions, and investigating selected events that emerge during the course of the outage.

  5. A stochastic model for the measurement of electricity outage costs

    SciTech Connect (OSTI)

    Grosfeld-Nir, A.; Tishler, A. (Tel Aviv Univ. (Israel))

    1993-01-01

    The measurement of customer outage costs has recently become an important subject of research for electric utilities. This paper uses a stochastic dynamic model as the starting point in developing a market-based method for the evaluation of outage costs. Specifically, the model postulates that once an electricity outage occurs, all production activity stops. Full production is resumed once the electricity outage is over. This process repeats itself indefinitely. The business customer maximizes his expected discounted profits (the expected value of the firm), taking into account his limited ability to respond to repeated random electricity outages. The model is applied to 11 industrial branches in Israel. The estimates exhibit a large variation across branches. 34 refs., 3 tabs.
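    The model described above (production stops when an outage occurs, resumes when it ends, repeating indefinitely, with the customer maximizing expected discounted profit) can be illustrated with a small Monte Carlo sketch. This is an illustrative reconstruction, not the authors' model: the exponential up/down durations, the rate parameters, and continuous discounting are assumptions made for the sketch.

```python
import math
import random

def expected_discounted_profit(profit_rate, outage_rate, repair_rate,
                               discount_rate, horizon=1000.0,
                               n_paths=2000, seed=0):
    """Monte Carlo estimate of expected discounted profit when production
    alternates between 'up' spells (interrupted at hazard outage_rate) and
    'down' outage spells (repaired at hazard repair_rate). Profit accrues
    at profit_rate only while up, discounted continuously at discount_rate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        t, value, up = 0.0, 0.0, True
        while t < horizon:
            rate = outage_rate if up else repair_rate
            end = min(t + rng.expovariate(rate), horizon)
            if up:
                # closed-form integral of profit_rate * e^(-r*s) over [t, end]
                value += (profit_rate / discount_rate) * (
                    math.exp(-discount_rate * t) - math.exp(-discount_rate * end))
            t, up = end, not up
        total += value
    return total / n_paths
```

With a negligible outage rate the estimate approaches profit_rate/discount_rate over a long horizon; introducing outages lowers the expected firm value, and that gap is one market-based expression of the outage cost.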

  6. Analysis of scrams and forced outages at boiling water reactors

    SciTech Connect (OSTI)

    Earle, R. T.; Sullivan, W. P.; Miller, K. R.; Schwegman, W. J.

    1980-07-01

    This report documents the results of a study of scrams and forced outages at General Electric Boiling Water Reactors (BWRs) operating in the United States. This study was conducted for Sandia Laboratories under a Light Water Reactor Safety Program which it manages for the United States Department of Energy. Operating plant data were used to identify the causes of scrams and forced outages. Causes of scrams and forced outages have been summarized as a function of operating plant and plant age and also ranked according to the number of events per year, outage time per year, and outage time per event. From this ranking, identified potential improvement opportunities were evaluated to determine the associated benefits and impact on plant availability.

  7. Technology Integration Initiative In Support of Outage Management

    SciTech Connect (OSTI)

    Gregory Weatherby; David Gertman

    2012-07-01

    Plant outage management is a high-priority concern for the nuclear industry from both cost and safety perspectives. Often, command and control during outages is maintained in the outage control center, where many of the underlying technologies supporting outage control are the same as those used in the 1980s. This research reports on the use of advanced integrating software technologies and handheld mobile devices as a means to reduce cycle time, improve accuracy, and enhance transparency among outage team members. This paper reports on the first phase of research supported by the DOE Light Water Reactor Sustainability (LWRS) Program, performed in close collaboration with industry, examining the introduction of newly available technology that allows for safe and efficient outage performance. It is thought that this research will result in improved resource management among various plant stakeholder groups, reduced paperwork, and enhanced overall situation awareness for the outage control center management team. Field data collection methods, including personnel interviews, success factors, end-user evaluation, and the integration of handheld devices into an integrated design, are also evaluated. Finally, the necessity of obtaining operations cooperation and support in field studies and technology evaluation is acknowledged.

  8. U.S. - Canada Power System Outage Task Force: Final Report on...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    - Canada Power System Outage Task Force: Final Report on the Implementation of Task Force Recommendations U.S. - Canada Power System Outage Task Force: Final Report on the ...

  9. Preparing for a Power Outage | Department of Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of September, FEMA's website Ready.gov will focus on a different emergency scenario and share tips on how to be prepared in case of floods, wildfires, hurricanes or power outages. ...

  10. SAMPLE RESULTS FROM MCU SOLIDS OUTAGE

    SciTech Connect (OSTI)

    Peters, T.; Washington, A.; Oji, L.; Coleman, C.; Poirier, M.

    2014-09-22

    Savannah River National Laboratory (SRNL) has received several solid and liquid samples from MCU in an effort to understand and recover from the system outage that began on April 6, 2014. SRNL concludes that the presence of solids in the Salt Solution Feed Tank (SSFT) is the likely root cause of the outage, based upon the following discoveries: a solids sample from extraction contactor #1 proved to be mostly sodium oxalate; a solids sample from scrub contactor #1 proved to be mostly sodium oxalate; a solids sample from the Salt Solution Feed Tank (SSFT) proved to be mostly sodium oxalate; an archived sample from Tank 49H taken last year was shown to contain a fine precipitate of sodium oxalate; a solids sample from the drain pipe of extraction contactor #1 proved to be mostly sodium aluminosilicate; and a liquid sample from the SSFT was shown to have elevated levels of oxalate anion compared to the expected concentration in the feed. Visual inspection of the SSFT indicated the presence of precipitated or transferred solids, which were likely also in the Salt Solution Receipt Tank (SSRT). The presence of the solids, coupled with agitation performed to maintain feed temperature, resulted in oxalate solids migration through the MCU system and caused hydraulic issues that resulted in unplanned phase carryover from the extraction into the scrub, and ultimately the strip, contactors. Not only did this carryover push the Strip Effluent (SE) out of waste acceptance specification, it also resulted in the deposition of solids in several of the contactors. At the same time, extensive deposits of aluminosilicates were found in the drain tube of extraction contactor #1. However, it is not known at this time how the aluminosilicate solids are related to the oxalate solids. The solids were successfully cleaned out of the MCU system. However, future consideration must be given to excluding oxalate solids from the MCU system.

  11. Development of Methodologies for Technology Deployment for Advanced Outage Control Centers that Improve Outage Coordination, Problem Resolution and Outage Risk Management

    SciTech Connect (OSTI)

    Shawn St. Germain; Ronald Farris; Heather Medeman

    2013-09-01

    This research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, a research and development (R&D) program sponsored by the Department of Energy (DOE) and performed in close collaboration with industry R&D programs, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. The LWRS program helps the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. The long-term viability of existing nuclear power plants in the U.S. will depend upon maintaining high capacity factors, avoiding nuclear safety issues, and reducing operating costs. The slow progress in the construction of new nuclear power plants has placed increased importance on maintaining the output of the current fleet of nuclear power plants. Recently expanded natural gas production has placed increased economic pressure on nuclear power plants due to lower-cost competition. Until recently, power uprate projects had steadily increased the total output of the U.S. nuclear fleet. Errors made during power plant upgrade projects have now removed three nuclear power plants from the U.S. fleet, and economic considerations have caused the permanent shutdown of a fourth plant. Additionally, several utilities have cancelled power uprate projects citing economic concerns. For the past several years, net electrical generation from U.S. nuclear power plants has been declining. One of the few remaining areas where significant improvements in plant capacity factors can be made is in minimizing the duration of refueling outages. Managing nuclear power plant outages is a complex and difficult task. Due to the large number of complex tasks and the uncertainty that accompanies them, outage durations routinely exceed the planned duration. The ability to complete an outage on or near

  12. SAS Output

    U.S. Energy Information Administration (EIA) Indexed Site

    B. U.S. Transformer Sustained Automatic Outage Counts and Hours by High-Voltage Size and NERC Region, 2013 Sustained Automatic Outage Counts High-Side Voltage (kV) Eastern...

  13. SAS Output

    U.S. Energy Information Administration (EIA) Indexed Site

    A. U.S. Transmission Circuit Sustained Automatic Outage Counts and Hours by High-Voltage Size and NERC Region, 2013 Sustained Automatic Outage Counts Voltage Region Type Operating...

  14. Multiplicity Counting

    SciTech Connect (OSTI)

    Geist, William H.

    2015-12-01

    This set of slides begins by giving background and a review of neutron counting; three attributes of a verification item are discussed: 240Pueff mass; α, the ratio of (α,n) neutrons to spontaneous fission neutrons; and leakage multiplication. It then takes up neutron detector systems – theory & concepts (coincidence counting, moderation, die-away time); detector systems – some important details (deadtime, corrections); introduction to multiplicity counting; multiplicity electronics and example distributions; singles, doubles, and triples from measured multiplicity distributions; and the point model: multiplicity mathematics.
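    The singles, doubles, and triples quantities mentioned above are built from factorial moments of the measured multiplicity distribution. As a hedged illustration (the normalizations, gate fractions, and dead-time corrections of a real analysis are omitted), the reduced factorial moments can be computed as:

```python
def reduced_factorial_moments(counts):
    """First three reduced factorial moments of a multiplicity histogram,
    where counts[n] is the number of counting gates that recorded n events.
    These moments underlie the singles/doubles/triples quantities of
    point-model multiplicity analysis (corrections omitted in this sketch)."""
    total = sum(counts)
    p = [c / total for c in counts]  # normalized distribution P(n)
    m1 = sum(n * pn for n, pn in enumerate(p))
    m2 = sum(n * (n - 1) / 2 * pn for n, pn in enumerate(p))
    m3 = sum(n * (n - 1) * (n - 2) / 6 * pn for n, pn in enumerate(p))
    return m1, m2, m3
```

For example, a histogram in which every gate records exactly two events has first, second, and third reduced factorial moments of 2, 1, and 0.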

  15. Overview of Common Mode Outages in Power Systems

    SciTech Connect (OSTI)

    Papic, Milorad; Awodele, Kehinde; Billinton, Roy; Dent, Chris; Eager, Dan; Hamoud, Gomaa; Jirutitijaroen, Panida; Kumbale, Murali; Mitra, Joydeep; Samaan, Nader A.; Schneider, Alex; Singh, Chanan

    2012-11-10

    This paper is a result of ongoing activity carried out by the Probability Applications for Common Mode Events (PACME) Task Force under the Reliability Risk and Probability Applications (RRPA) Subcommittee. The paper is intended to constitute a valid source of information and references about dealing with common-mode outages in power systems reliability analysis. This effort involves reviewing published literature and presenting state-of-the-art research and practical applications in the area of common-mode outages. Evaluation of available outage statistics shows that there is a definite need for a collective effort from academia and industry not only to recommend procedures for data collection and monitoring but also to provide appropriate mathematical models to assess such events.

  16. Hopper compilers and DDT short outage next Wed, May 16

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Hopper compilers and DDT short outage next Wed, May 16. May 10, 2012. Due to a scheduled maintenance for the License Servers, most of the compilers (except GNU) and the DDT debugger on Hopper will not be available from 10:30 am to 12:30 pm on Wednesday, May 16. If there are any questions or concerns, please contact "consult at nersc dot gov".

  17. Hopper scheduled maintenance tomorrow (Sept 19) and /project outage

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Hopper scheduled maintenance tomorrow (Sept 19) and /project outage. September 18, 2012, by Helen He. There will be a scheduled hardware and software maintenance for Hopper next Wednesday, Sept 19, from 6:30 am to midnight Pacific time. Please plan your work accordingly and check the NERSC Message of the Day (MOTD) for status update: http://www.nersc.gov/live-status/motd/. The /project file system (also known as /global/project) will be

  18. Assessment of Critical Events Corridors through Multivariate Cascading Outages Analysis

    SciTech Connect (OSTI)

    Makarov, Yuri V.; Samaan, Nader A.; Diao, Ruisheng; Kumbale, Murali; Chen, Yousu; Singh, Ruchi; Green, Irina; Morgan, Mark P.

    2011-10-17

    Massive blackouts of electrical power systems in North America over the past decade have focused increasing attention upon ways to identify and simulate network events that may potentially lead to widespread network collapse. This paper summarizes a method to simulate power-system vulnerability to cascading failures for a supplied set of initiating events, termed extreme events. The implemented simulation method is currently confined to simulating the steady-state power-system response to a set of extreme events. The outlined method of simulation is meant to augment and provide new insight into bulk power transmission network planning, which at present remains mainly confined to maintaining power system security for single and double component outages under a number of projected future network operating conditions. Although one of the aims of this paper is to demonstrate the feasibility of simulating network vulnerability to cascading outages, a more important goal has been to determine vulnerable parts of the network that may potentially be strengthened in practice so as to mitigate system susceptibility to cascading failures. This paper proposes a systematic approach to analyze extreme events and identify vulnerable system elements that may be contributing to cascading outages. The hypothesis of critical events corridors is proposed to represent repeating sequential outages that can occur in the system for multiple initiating events. The new concept helps to identify system reinforcements that planners could engineer in order to 'break' the critical events sequences and therefore lessen the likelihood of cascading outages. This hypothesis has been successfully validated with a California power system model.
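    The cascading sequence analyzed here (an initiating outage overloads other components, which trip in turn) can be illustrated with a deliberately simplified load-shedding loop. This toy model with equal load redistribution is an assumption made for illustration only; it is not the steady-state power-flow simulation used in the paper.

```python
def simulate_cascade(loads, capacities, initial_failures):
    """Toy cascading-outage loop: failed components shed their load equally
    onto the surviving components; any survivor pushed past its capacity
    fails in the next round. Returns the set of failed component indices."""
    n = len(loads)
    load = list(loads)
    failed = set(initial_failures)
    newly = set(initial_failures)
    while newly:
        shed = sum(load[i] for i in newly)
        for i in newly:
            load[i] = 0.0
        alive = [i for i in range(n) if i not in failed]
        if not alive:
            break  # total collapse
        extra = shed / len(alive)
        for i in alive:
            load[i] += extra
        newly = {i for i in alive if load[i] > capacities[i]}
        failed |= newly
    return failed
```

Tripping component 0 in a three-component system where component 1 has little spare capacity overloads component 1 but not the larger component 2, so the cascade stops after two failures; repeated sequences of this kind are what the critical-events-corridor hypothesis looks for.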

  19. Notification of Planned 230kV Outage at Potomac River Generating...

    Energy Savers [EERE]

    Subject: Notification of Planned 230kV Outage at Potomac River Generating Station To ... The next planned outage on 23106 high voltage circuit between xxxxxxx xx xxxx and ...

  20. A framework and review of customer outage costs: Integration and analysis of electric utility outage cost surveys

    SciTech Connect (OSTI)

    Lawton, Leora; Sullivan, Michael; Van Liere, Kent; Katz, Aaron; Eto, Joseph

    2003-11-01

    A clear understanding of the monetary value that customers place on reliability, and of the factors that give rise to higher and lower values, is an essential tool in determining investment in the grid. The recent National Transmission Grid Study recognizes the need for this information as one of growing importance for both public and private decision makers. In response, the U.S. Department of Energy has undertaken this study as a first step toward addressing the current absence of consistent data needed to support better estimates of the economic value of electricity reliability. Twenty-four studies, conducted by eight electric utilities between 1989 and 2002 and representing residential and commercial/industrial (small, medium and large) customer groups, were chosen for analysis. The studies cover virtually all of the Southeast, most of the western United States, including California, rural Washington and Oregon, and the Midwest south and east of Chicago. All variables were standardized to a consistent metric and dollar amounts were adjusted to the 2002 CPI. The data were then incorporated into a meta-database in which each outage scenario (e.g., the loss of electric service for one hour on a weekday summer afternoon) is treated as an independent case or record, both to permit comparisons between outage characteristics and to increase the statistical power of analysis results. Unadjusted average outage costs and Tobit models that estimate customer damage functions are presented. The customer damage functions express customer outage costs for a given outage scenario and customer class as a function of location, time of day, consumption, and business type. One can use the damage functions to calculate outage costs for specific customer types. For example, using the customer damage functions, the cost experienced by an ''average'' customer resulting from a 1 hour summer afternoon outage is estimated to be approximately $3 for a residential customer, $1,200 for small
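    A customer damage function of the kind described maps outage-scenario attributes to a cost. As a minimal sketch, interpolation on a tabulated function per customer class might look like the following; the table values are hypothetical (loosely anchored to the ~$3 residential and $1,200 small-commercial one-hour figures quoted above), and a real damage function would also condition on location, time of day, consumption, and business type.

```python
# Hypothetical per-class damage tables: (outage hours, cost in 2002 dollars).
DAMAGE_POINTS = {
    "residential": [(0, 0.0), (1, 3.0), (4, 7.0), (8, 11.0)],
    "small_commercial": [(0, 0.0), (1, 1200.0), (4, 3500.0), (8, 6000.0)],
}

def outage_cost(customer_class, hours):
    """Linear interpolation on a tabulated customer damage function;
    durations beyond the table are clamped to the last tabulated cost."""
    pts = DAMAGE_POINTS[customer_class]
    if hours <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if hours <= x1:
            return y0 + (y1 - y0) * (hours - x0) / (x1 - x0)
    return pts[-1][1]
```

This mirrors how a meta-database of outage scenarios can be queried: pick a class and duration, read off the estimated cost.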

  1. Risk Assessment of Cascading Outages: Methodologies and Challenges

    SciTech Connect (OSTI)

    Vaiman, Marianna; Bell, Keith; Chen, Yousu; Chowdhury, Badrul; Dobson, Ian; Hines, Paul; Papic, Milorad; Miller, Stephen; Zhang, Pei

    2012-05-31

    Abstract- This paper is a result of ongoing activity carried out by Understanding, Prediction, Mitigation and Restoration of Cascading Failures Task Force under IEEE Computer Analytical Methods Subcommittee (CAMS). The task force's previous papers are focused on general aspects of cascading outages such as understanding, prediction, prevention and restoration from cascading failures. This is the first of two new papers, which extend this previous work to summarize the state of the art in cascading failure risk analysis methodologies and modeling tools. This paper is intended to be a reference document to summarize the state of the art in the methodologies for performing risk assessment of cascading outages caused by some initiating event(s). A risk assessment should cover the entire potential chain of cascades starting with the initiating event(s) and ending with some final condition(s). However, this is a difficult task and heuristic approaches and approximations have been suggested. This paper discusses different approaches to this and suggests directions for future development of methodologies. The second paper summarizes the state of the art in modeling tools for risk assessment of cascading outages.

  2. Survey of tools for risk assessment of cascading outages

    SciTech Connect (OSTI)

    Papic, Milorad; Bell, Keith; Chen, Yousu; Dobson, Ian; Fonte, Louis; Haq, Enamul; Hines, Paul; Kirschen, Daniel; Luo, Xiaochuan; Miller, Stephen; Samaan, Nader A.; Vaiman, Marianna; Varghese, Matthew; Zhang, Pei

    2011-10-01

    Abstract-This paper is a result of ongoing activity carried out by the Understanding, Prediction, Mitigation and Restoration of Cascading Failures Task Force under the IEEE Computer Analytical Methods Subcommittee (CAMS). The task force's previous papers [1, 2] focused on general aspects of cascading outages such as understanding, prediction, prevention and restoration from cascading failures. This is the second of two new papers, which extend this previous work to summarize the state of the art in cascading failure risk analysis methodologies and modeling tools. The first paper reviews the state of the art in methodologies for performing risk assessment of potential cascading outages [3]. This paper describes the state of the art in cascading failure modeling tools, documenting the view of experts representing utilities, universities and consulting companies. The paper is intended to constitute a valid source of information and references about presently available tools that deal with prediction of cascading failure events. This effort involves reviewing published literature and other documentation from vendors, universities and research institutions. The assessment of cascading outage risk is continuously evolving. Investigations to gain an even better understanding and identification of cascading events are the subject of several ongoing research programs aimed at addressing the complexity of the events that electrical utilities face today. Assessing the risk of cascading failure events in planning and operation of power transmission systems requires adequate mathematical tools and software.

  3. AUTOMATIC HAND COUNTER

    DOE Patents [OSTI]

    Mann J.R.; Wainwright, A.E.

    1963-06-11

    An automatic, personnel-operated, alpha-particle hand monitor is described which functions as a qualitative instrument to indicate to the person using it whether his hands are "cold" or "hot." The monitor is activated by a push button and includes several capacitor-triggered thyratron tubes. Upon release of the push button, the monitor starts counting the radiation present on the hands of the person. If the radiation count exceeds a predetermined level within a predetermined time, a capacitor will trigger the first thyratron tube to light a "hot" lamp. If, however, the count is below that level during the time period, another capacitor will fire a second thyratron to light a "safe" lamp. (AEC)

  4. Design Concepts for an Outage Control Center Information Dashboard

    SciTech Connect (OSTI)

    Hugo, Jacques Victor; St Germain, Shawn Walter; Thompson, Cheradan Jo; Whitesides, McKenzie Jo; Farris, Ronald Keith

    2015-12-01

    The nuclear industry, and the business world in general, is facing a rapidly increasing amount of data to be dealt with on a daily basis. In the last two decades, the steady improvement of data storage devices and of the means to create and collect data has influenced the manner in which we deal with information. Most data is still stored without filtering and refinement for later use. Many functions at a nuclear power plant generate vast amounts of data, with scheduled and unscheduled outages being a prime example of a source of some of the most complex data sets at the plant. To make matters worse, modern information and communications technology is making it possible to collect and store data faster than our ability to use it for making decisions. However, in most applications, especially outages, raw data has no value in itself; instead, managers, engineers and other specialists want to extract the information contained in it. The complexity and sheer volume of data could lead to information overload, resulting in getting lost in data that may be irrelevant to the task at hand, processed in an inappropriate way, or presented in an ineffective way. To prevent information overload, many data sources are ignored, so production opportunities are lost because utilities lack the ability to deal with the enormous data volumes properly. Decision-makers are often confronted with large amounts of disparate, conflicting and dynamic information available from multiple heterogeneous sources. Information and communication technologies alone will not solve this problem. Utilities need effective methods to exploit and use the hidden opportunities and knowledge residing in unexplored data resources. Superior performance before, during and after outages depends upon the right information being available at the right time to the right people. Acquisition of raw data is the easy part; instead, it is the ability to use advanced analytical, data processing and data

  5. Study, outlines why outages go long, short, or on-time

    SciTech Connect (OSTI)

    Not Available

    1993-09-01

    A recent report by a nuclear industry professional, based on a survey of outage managers at US nuclear power plants, declares that "preplanned outage schedules appear to be grossly inaccurate, and the outage management planners and schedulers do not have a grasp of the requirements and/or the resources needed to complete the actual activities on schedule." It declares that "the scheduled duration of a planned outage must be realistic." The study identifies personnel, planning and scheduling, and equipment/hardware as "the primary reasons why refueling outages and outage activities finished ahead of, right on, or behind schedule."

  6. Risk Assessment of Cascading Outages: Part I - Overview of Methodologies

    SciTech Connect (OSTI)

    Vaiman, Marianna; Bell, Keith; Chen, Yousu; Chowdhury, Badrul; Dobson, Ian; Hines, Paul; Papic, Milorad; Miller, Stephen; Zhang, Pei

    2011-07-31

    This paper is a result of ongoing activity carried out by the Understanding, Prediction, Mitigation and Restoration of Cascading Failures Task Force under the IEEE Computer Analytical Methods Subcommittee (CAMS). The task force's previous papers focused on general aspects of cascading outages such as understanding, prediction, prevention and restoration from cascading failures. This is the first of two new papers, which extend this previous work to summarize the state of the art in cascading failure risk analysis methodologies and modeling tools. This paper is intended to be a reference document summarizing the state of the art in the methodologies for performing risk assessment of cascading outages caused by some initiating event(s). A risk assessment should cover the entire potential chain of cascades, starting with the initiating event(s) and ending with some final condition(s). However, this is a difficult task, and heuristic approaches and approximations have been suggested. This paper discusses different approaches to this and suggests directions for future development of methodologies.

  7. A Study of Outage Management Practices at Selected U.S. Nuclear Plants

    SciTech Connect (OSTI)

    Lin, James C.

    2002-07-01

    This paper presents insights gained from a study of the outage management practices at a number of U.S. nuclear plants. The objective of the study was to conduct an in-depth review of the current practices of outage management at these selected plants and identify important factors that have contributed to the recent success of their outage performance. Two BWR-4, three BWR-6, and two 3-loop Westinghouse PWR plants were selected for this survey. The results of this study can be used to formulate outage improvement efforts for nuclear plants in other countries. (author)

  8. Further Notice of 230kV Circuit Planned Outages | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Further Notice of 230kV Circuit Planned Outages. Docket No. EO-05-01, Order No. 202-05-03: Pursuant to the United States Department of Energy ("DOE") Order No. 202-05-3, issued December 20, 2005 ("DOE Potomac River Order"), Pepco hereby files this Further Notice of 230kV Circuit Planned Outages serving the Potomac River Substation, and through that station, the District of Columbia.

  9. Status of U.S. Nuclear Outages - U.S. Energy Information Administratio...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    < NUCLEAR & URANIUM Data Status of U.S. Nuclear Outages Download Download Link to: Nuclear Regulatory Commission's Power Reactor Status Report

  10. Procedures and equipment for shortening refueling outages in Babcock and Wilcox PWRs. Final report

    SciTech Connect (OSTI)

    Baker, H.A.; Carr, C.W.

    1985-04-01

    New refueling equipment and procedures - plus a software package bid specification for outage management - can reduce refueling outages in Babcock and Wilcox PWRs. At Duke Power Company's Oconee nuclear station, a single modification in the fuel-handling system cut 5 days off the refueling schedule.

  11. Quantitative evaluation of savings in outage costs by using emergency actions strategy

    SciTech Connect (OSTI)

    Akhtar, A.; Asuhaimi, A.; Shaibon, H. [Univ. Teknologi Malaysia, Johor Bharu (Malaysia); Lo, K.L. [Univ. of Strathclyde, Glasgow (United Kingdom)

    1995-12-31

    This paper presents the results of a study carried out to assess the savings in consumer outage costs that can be accrued as a result of implementing Emergency Actions Strategy. The use of Emergency Actions Strategy plays a significant role in curtailing the consumer outage costs ensuing from unreliable electric service. In order to calculate the savings in outage costs, the probabilistic framework of the frequency and duration method has been used in conjunction with emergency actions. At first, the outage costs of various consumer sectors are estimated without considering the emergency actions. Secondly, the consumer outage costs are calculated by combining the frequency and duration method, and unserved energy with the emergency actions invoked. The results of the savings in consumer outage costs that can be accrued by utilizing Emergency Actions Strategy are presented for a synthetic system. The results of the study show that substantial savings in consumer outage costs are obtained by devising and implementing emergency actions strategy in situations of capacity outages. The results are of particular relevance and utility to the underdeveloped and developing countries where capacity shortages occur quite frequently. These results also suggest the importance of emergency actions strategy for electric utilities in reducing the consumer economic losses arising from unreliable electric service.
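    The frequency-and-duration calculation described above (baseline outage costs minus the costs with emergency actions invoked) can be sketched as follows. The event tuples, load, and flat $/kWh cost rate are hypothetical inputs for illustration; the study itself uses sector-specific outage cost estimates rather than a single rate.

```python
def annual_outage_cost(events, load_kw, cost_per_kwh):
    """Expected annual outage cost under a frequency-and-duration model:
    each event is (occurrences per year, hours per occurrence), and cost
    is proportional to the unserved energy load_kw * hours."""
    return sum(freq * hours * load_kw * cost_per_kwh for freq, hours in events)

def emergency_action_savings(base_events, mitigated_events, load_kw, cost_per_kwh):
    """Savings from an emergency-actions strategy: baseline cost minus the
    cost with outage frequencies/durations reduced by the strategy."""
    return (annual_outage_cost(base_events, load_kw, cost_per_kwh)
            - annual_outage_cost(mitigated_events, load_kw, cost_per_kwh))
```

For example, if emergency actions shorten two four-hour capacity-shortage events per year to one hour each for a 100 kW consumer valued at $5/kWh of unserved energy, the annual saving is $3,000.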

  12. A Review of Power Outages and Restoration Following the June 2012 Derecho

    Broader source: Energy.gov [DOE]

    The Office of Electricity Delivery and Energy Reliability has released a report that reviews power outages and restoration efforts following the June 29, 2012 Derecho and compares them to outages and restoration efforts following other spring and summer storms in the Ohio Valley and Mid-Atlantic regions.

  13. Use of collaboration software to improve nuclear power plant outage management

    SciTech Connect (OSTI)

    Germain, Shawn

    2015-02-01

    Nuclear Power Plant (NPP) refueling outages create some of the most challenging activities the utilities face in both tracking and coordinating thousands of activities in a short period of time. Other challenges, including nuclear safety concerns arising from atypical system configurations and resource allocation issues, can create delays and schedule overruns, driving up outage costs. Today, the majority of outage communication is done using processes that do not take advantage of advances in modern technologies that enable enhanced communication, collaboration, and information sharing. Some of the common practices include: runners that deliver paper-based requests for approval, radios, telephones, desktop computers, daily schedule printouts, and static whiteboards that are used to display information. Many gains have been made to reduce the challenges facing outage coordinators; however, new opportunities can be realized by utilizing modern technological advancements in communication and information tools that can enhance the collective situational awareness of plant personnel, leading to improved decision-making. Ongoing research as part of the Light Water Reactor Sustainability Program (LWRS) has been targeting NPP outage improvement. As part of this research, various applications of collaborative software have been demonstrated through pilot project utility partnerships. Collaboration software can be utilized as part of the larger concept of Computer-Supported Cooperative Work (CSCW). Collaborative software can be used for emergent issue resolution, Outage Control Center (OCC) displays, and schedule monitoring. Use of collaboration software enables outage staff and subject matter experts (SMEs) to view and update critical outage information from any location on site or off.

  14. Job Counting Guidelines

    Office of Environmental Management (EM)

    Definitions and Guidelines for Counting Monthly and Quarterly EM Recovery Act Full Time Equivalents (FTEs) and Cumulative Head-Count The following updated definitions and...

  15. Smart Grid Week: Hurricane Season and the Department’s Efforts to Make the Grid More Resilient to Power Outages

    Broader source: Energy.gov [DOE]

    Next up in our Smart Grid Week series -- improving electric grid technologies to adequately prepare for emergencies with power outages.

  16. Notice of Unplanned Outage at the Mirant Potomac River Plant

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Docket No. EO-05-01, Order No. 202-05-3: Pursuant to the United States Department of Energy ("DOE") Order No. 202-05-3, issued December 20, 2005 ("DOE Potomac River Order"), Pepco hereby files this notice of an unplanned outage of one of the 230kV circuits serving the Potomac River Substation, and through that station, the District of Columbia.

  17. Notification of Planned 230kV Outage at Potomac River Generating Station

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    The next planned outage on xxxxx high voltage circuit between Palmers Corner Substation and the Potomac River Generating Station is scheduled for Sunday, June 3, 2007 and will begin at 4:00 AM, with a scheduled return date of Saturday, June 9, 2007 at 2:00 PM.

  18. Hoboken Hopes To Reduce Power Outages With New 'Smart Grid' System

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... HOBOKEN, N.J. (CBSNewYork) - Officials are hoping to reduce power outages during future storms in Hoboken by designing a "smart grid" system using military-inspired technology. The ...

  19. Notification of Planned 230kV Outage at Potomac River Generating Station

    Office of Energy Efficiency and Renewable Energy (EERE)

    Docket No. EO-05-01. The next planned outage on xxxxx high voltage circuit between xxxxx and xxxxx is tentatively scheduled for Saturday May 19, 2007 and will begin at 4:00 AM with a scheduled...

  20. Notification of Planned 230kV Outage at Potomac River Generating...

    Office of Environmental Management (EM)

    Re: Potomac River Generating Station Department of Energy, Case No. EO-05-01: Potomac Electric Power Company (PEPCO) revised plan for transmission outages for the 230 kV circuits ...

  1. Notification of Planned 230kV Outage at Potomac River Generating...

    Energy Savers [EERE]

    Re: Potomac River Generating Station Department of Energy Case No. EO-05-01: Advanced Notice of Power Outages. Special Environmental Analysis For Actions Taken under U.S. ...

  2. Pepco Update on Current Construction Work and Mirant Generation Needs for Pepco's Planned June Line Outage

    Office of Energy Efficiency and Renewable Energy (EERE)

    Docket No. EO-05-01.  Pepco needs the following to occur to provide necessary reliability to the central D.C. area during this scheduled June outage in order to complete installation of new...

  3. U.S. - Canada Power System Outage Task Force: Final Report on the Implementation of Task Force Recommendations

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    On August 14, 2003, the largest power blackout in North American history affected an area with an estimated 50 million people and 61,800 megawatts (MW) of electric load in the states of Ohio, Michigan,

  4. OTRA-THS MAC to reduce Power Outage Data Collection Latency in a smart meter network

    SciTech Connect (OSTI)

    Garlapati, Shravan K; Kuruganti, Phani Teja; Buehrer, Richard M; Reed, Jeffrey H

    2014-01-01

    The deployment of advanced metering infrastructure by the electric utilities poses unique communication challenges, particularly as the number of meters per aggregator increases. During a power outage, a smart meter tries to report it instantaneously to the electric utility. In a densely populated residential/industrial locality, it is possible that a large number of smart meters simultaneously try to get access to the communication network to report the power outage. If the number of smart meters is very high, of the order of tens of thousands (as in metropolitan areas), the power outage data flooding can lead to Random Access CHannel (RACH) congestion. Several utilities are considering the use of cellular networks for smart meter communications. In 3G/4G cellular networks, RACH congestion not only leads to collisions, retransmissions, and increased RACH delays, but also has the potential to disrupt the dedicated traffic flow by increasing the interference levels (3G CDMA). To overcome this problem, in this paper we propose a Time Hierarchical Scheme (THS) that reduces the intensity of power outage data flooding and the power outage reporting delay by 6/7th and 17/18th, respectively, compared to their values without THS. Also, we propose an Optimum Transmission Rate Adaptive (OTRA) MAC to optimize the latency of power outage data collection. The analysis and simulation results presented in this paper show that together the OTRA and THS features of the proposed MAC result in a Power Outage Data Collection Latency (PODCL) that is 1/10th of the 4G LTE PODCL.
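    The time-hierarchical idea of spreading outage reports across slots can be sketched as follows. This is a generic staggering illustration, not the paper's THS/OTRA design: the group count, slot width, and modulo assignment rule are all hypothetical.

```python
def report_offset_ms(meter_id: int, num_groups: int, slot_ms: int) -> int:
    """Deterministic tier assignment: a meter in group g waits g * slot_ms before
    its first channel-access attempt, so a mass outage report is spread over
    num_groups slots instead of all meters hitting the RACH at once."""
    return (meter_id % num_groups) * slot_ms

def peak_simultaneous(meter_ids, num_groups):
    """Worst-case number of meters whose first attempts land in the same slot."""
    counts = {}
    for m in meter_ids:
        g = m % num_groups
        counts[g] = counts.get(g, 0) + 1
    return max(counts.values())
```

    With, say, 10,000 meters and 7 groups, the per-slot load drops to roughly one-seventh of the ungrouped case, at the cost of a bounded extra reporting delay per tier.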

  5. Status Report on the Development of Micro-Scheduling Software for the Advanced Outage Control Center Project

    SciTech Connect (OSTI)

    Shawn St. Germain; Kenneth Thomas; Ronald Farris; Jeffrey Joe

    2014-09-01

    The long-term viability of existing nuclear power plants (NPPs) in the United States (U.S.) is dependent upon a number of factors, including maintaining high capacity factors, maintaining nuclear safety, and reducing operating costs, particularly those associated with refueling outages. Refueling outages typically take 20-30 days, and for existing light water NPPs in the U.S., the reactor cannot be in operation during the outage. Furthermore, given that many NPPs generate $1-1.5 million/day in revenue when in operation, there is considerable interest in shortening the length of refueling outages. Yet, refueling outages are highly complex operations, involving multiple concurrent and dependent activities that are difficult to coordinate. Finding ways to improve refueling outage performance while maintaining nuclear safety has proven to be difficult. The Advanced Outage Control Center project is a research and development (R&D) demonstration activity under the Light Water Reactor Sustainability (LWRS) Program. LWRS is an R&D program that works with industry R&D programs to establish technical foundations for the licensing and managing of long-term, safe, and economical operation of current NPPs. The Advanced Outage Control Center project has the goal of improving the management of commercial NPP refueling outages. To accomplish this goal, this INL R&D project is developing an advanced outage control center (OCC) that is specifically designed to maximize the usefulness of communication and collaboration technologies for outage coordination and problem resolution activities. This report describes specific recent efforts to develop a capability called outage Micro-Scheduling. Micro-Scheduling is the ability to allocate and schedule outage support task resources on a sub-hour basis. Micro-Scheduling is the real-time fine-tuning of the outage schedule to react to the actual progress of the primary outage activities to ensure that support task resources are

  6. Evaluation of the marginal outage costs in interconnected and composite power systems

    SciTech Connect (OSTI)

    Ghajar, R.; Billinton, R.

    1995-05-01

    The structure of electric utilities is undergoing dramatic changes as new and expanded service options are added. The concepts of unbundling the electric service and offering customers a range of new services that more closely track actual costs are expanding the options open to customers. Spot pricing provides the economic structure for many of these new service options. An important component of spot prices is the marginal outage cost incurred by customers due to an incremental change in load. This paper presents a formalized approach of calculating the marginal outage cost in interconnected generating systems and composite generation and transmission systems using quantitative reliability techniques. The effects of selected pertinent factors on the marginal outage cost in composite systems are also presented. The proposed methods are illustrated by application to the IEEE-Reliability Test System (IEEE-RTS).

  7. Nuclear Safety Risk Management in Refueling Outage of Qinshan Nuclear Power Plant

    SciTech Connect (OSTI)

    Meijing Wu; Guozhang Shen

    2006-07-01

    NPPs plan maintenance, in-service inspection, surveillance testing, fuel handling, and design modifications during the refueling outage; at this time operator response capability is reduced, with some plant systems out of service or without power. Based on experience from 8 refueling outages at the Qinshan NPP, this article provides good practices and lessons learned for nuclear safety risk management, focused on four safety function areas: residual heat removal capability, inventory control, power availability, and reactivity control. (authors)

  8. Pepco Update on Current Construction Work and Mirant Generation Needs for Pepco's Planned June Line Outage

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    May 25, 2007 Kevin Kolevar Director of the Office of Electricity Deliverability and Energy Reliability Department of Energy 1000 Independence Ave., SW Washington, DC 20585 Dear Mr. Kolevar, DOE has requested that Pepco provide an update on the current work to install two new 230 kilovolt circuits into Potomac River substation and to evaluate the need for generation from the Potomac River plant to support the anticipated line outage during June, 2007. An outage on one of the 230 kV circuits is

  9. Application of Hybrid Geo-Spatially Granular Fragility Curves to Improve Power Outage Predictions

    SciTech Connect (OSTI)

    Fernandez, Steven J; Allen, Melissa R; Omitaomu, Olufemi A; Walker, Kimberly A

    2014-01-01

    Fragility curves depict the relationship between a weather variable (wind speed, gust speed, ice accumulation, precipitation rate) and the observed outages for a targeted infrastructure network. This paper describes an empirical study of the county-by-county distribution of power outages and one-minute weather variables during Hurricane Irene, with the objective of comparing 1) as-built fragility curves (statistical approach) to engineering as-designed (bottom-up) fragility curves for skill in forecasting outages during future hurricanes; 2) county-specific fragility curves, to find examples of significant deviation from average behavior; and 3) the engineering practices of outlier counties, to suggest future engineering studies of robustness. Outages in more than 90% of the impacted counties could be anticipated through an average or generic fragility curve. The remaining counties could be identified and handled as exceptions through geographic data sets. The counties with increased or decreased robustness were characterized by terrain more or less susceptible to persistent flooding in areas where above-ground poles located their foundations. Land use characteristics of the area served by the power distribution system can suggest trends in the as-built power grid vulnerabilities to extreme weather events that would be subjects for site-specific studies.
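    An empirical fragility curve of the kind described above is essentially a sigmoid fit of outage fraction against a weather variable. The sketch below fits a logistic curve to invented per-county data by plain gradient descent; the study's actual statistical machinery is not described at this level of detail, so everything here (data, loss, learning rate) is illustrative.

```python
import math

def fit_fragility(wind, outage_frac, lr=0.5, steps=5000):
    """Fit p(outage) = 1 / (1 + exp(-(a + b*w))) to observed county outage
    fractions by gradient descent on squared error (illustrative only)."""
    a, b = 0.0, 0.0
    n = len(wind)
    for _ in range(steps):
        ga = gb = 0.0
        for w, y in zip(wind, outage_frac):
            p = 1.0 / (1.0 + math.exp(-(a + b * w)))
            g = (p - y) * p * (1.0 - p)   # chain rule through the sigmoid
            ga += g
            gb += g * w
        a -= lr * ga / n
        b -= lr * gb / n
    return a, b

def predict(a, b, w):
    return 1.0 / (1.0 + math.exp(-(a + b * w)))

# Hypothetical gust speeds (tens of mph) vs. fraction of customers out:
wind = [2.0, 3.0, 4.0, 5.0, 6.0]
frac = [0.05, 0.20, 0.50, 0.80, 0.95]
a, b = fit_fragility(wind, frac)
```

    A county whose observed outages sit well above or below the fitted curve would be the kind of "outlier county" the paper flags for site-specific study.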

  10. Blackout 2003: Energy Secretary Bodman and Minister of Natural Resources for Canada Lunn Release the 2003 Power Outage Final Report, October 3, 2006

    Broader source: Energy.gov [DOE]

    Energy Secretary Bodman and Minister of Natural Resources for Canada Lunn Release the 2003 Power Outage Final Report. In accordance with the mandate of the U.S.-Canada Power Outage Task Force, the...

  11. Fast counting electronics for neutron coincidence counting

    DOE Patents [OSTI]

    Swansen, J.E.

    1985-03-05

    An amplifier-discriminator is tailored to output a very short pulse upon an above-threshold input from a detector which may be a ³He detector. The short pulse output is stretched and energizes a light emitting diode (LED) to provide a visual output of operation and pulse detection. The short pulse is further fed to a digital section for processing and possible ORing with other like generated pulses. Finally, the output (or ORed output) is fed to a derandomizing buffer which converts the rapidly and randomly occurring pulses into synchronized and periodically spaced-apart pulses for the accurate counting thereof. Provision is also made for the internal and external disabling of each individual channel of amplifier-discriminators in an ORed plurality of same.

  12. Fast counting electronics for neutron coincidence counting

    DOE Patents [OSTI]

    Swansen, James E.

    1987-01-01

    An amplifier-discriminator is tailored to output a very short pulse upon an above-threshold input from a detector which may be a ³He detector. The short pulse output is stretched and energizes a light emitting diode (LED) to provide a visual output of operation and pulse detection. The short pulse is further fed to a digital section for processing and possible ORing with other like generated pulses. Finally, the output (or ORed output) is fed to a derandomizing buffer which converts the rapidly and randomly occurring pulses into synchronized and periodically spaced-apart pulses for the accurate counting thereof. Provision is also made for the internal and external disabling of each individual channel of amplifier-discriminators in an ORed plurality of same.

  13. Method for estimating power outages and restoration during natural and man-made events

    DOE Patents [OSTI]

    Omitaomu, Olufemi A.; Fernandez, Steven J.

    2016-01-05

    A method of modeling electric supply and demand with a data processor in combination with a recordable medium, and for estimating spatial distribution of electric power outages and affected populations. A geographic area is divided into cells to form a matrix. Within the matrix, supply cells are identified as containing electric substations and demand cells are identified as including electricity customers. Demand cells of the matrix are associated with the supply cells as a function of the capacity of each of the supply cells and the proximity and/or electricity demand of each of the demand cells. The method includes estimating a power outage by applying disaster event prediction information to the matrix, and estimating power restoration using the supply and demand cell information of the matrix and standardized and historical restoration information.
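    The cell-matrix association step described in the patent abstract can be sketched in software. The version below uses only the proximity rule (nearest substation serves each demand cell); the patented method also weighs substation capacity and electricity demand, which is omitted here, and all names and numbers are illustrative.

```python
import math

def assign_demand_cells(substations, demand_cells):
    """substations: {name: (x, y)}; demand_cells: {(x, y): load_mw}.
    Serve each demand cell from its nearest substation (proximity rule only)."""
    served = {name: 0.0 for name in substations}
    for (dx, dy), load in demand_cells.items():
        nearest = min(substations,
                      key=lambda s: math.hypot(substations[s][0] - dx,
                                               substations[s][1] - dy))
        served[nearest] += load
    return served

def outage_impact_mw(served, failed_substations):
    """Load predicted to be lost when the listed substations fail; affected
    population would follow via a load-per-capita factor."""
    return sum(served[s] for s in failed_substations)

# Hypothetical two-substation grid:
subs = {"A": (0, 0), "B": (10, 0)}
demand = {(1, 0): 5.0, (2, 0): 3.0, (9, 0): 7.0}
served = assign_demand_cells(subs, demand)
```

    Applying a disaster-event prediction then amounts to marking a subset of substations as failed and summing the load of their associated demand cells, as in `outage_impact_mw`.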

  14. Recent Performance of and Plasma Outage Studies with the SNS H- Source

    SciTech Connect (OSTI)

    Stockli, Martin P; Han, Baoxi; Murray Jr, S N; Pennisi, Terry R; Piller, Chip; Santana, Manuel; Welton, Robert F

    2016-01-01

    SNS is ramping to higher power levels that can be sustained with high availability. The goal is 1.4 MW despite a compromised RFQ, which requires higher-than-design RF power to approach the nominal beam transmission. Unfortunately, at higher power the RFQ often loses its thermal stability, a problem apparently enhanced by beam losses and high influxes of hydrogen. Delivering as much H- beam as possible with the least amount of hydrogen led to plasma outages. The root cause is the dense, 1-ms-long, ~55-kW 2-MHz plasma pulses reflecting ~90% of the continuous ~300 W, 13-MHz power, which was mitigated with a 4-ms filter for the reflected-power signal and an outage-resistant, slightly-detuned 13-MHz match. Lowering the H2 also increased the H- beam current to ~55 mA and increased the transmission by ~7%.

  15. LOW ENERGY COUNTING CHAMBERS

    DOE Patents [OSTI]

    Hayes, P.M.

    1960-02-16

    A beta particle counter adapted to use an end window made of polyethylene terephthalate was designed. The extreme thinness of the film results in a correspondingly high transmission of incident low-energy beta particles by the window. As a consequence, the counting efficiency of the present counter is over 40% greater than counters using conventional mica end windows.

  16. Final Remediation Report for the K-Area Bingham Pump Outage Pit (643-1G)

    SciTech Connect (OSTI)

    Morganstern, M.

    2002-06-18

    The K-Area Bingham Pump Outage Pit (K BPOP) Building Number 643-1G, is situated immediately south and outside the K-Reactor fence line and is approximately 400 feet in length and 60 feet in width. For the K BPOP operable unit, the Land Use Control (LUC) objectives are to prevent contact, removal, or excavation of buried waste in the area and to preclude residential use of the area.

  17. Energy Secretary Bodman and Minister of Natural Resources for Canada Lunn Release the 2003 Power Outage Final Report

    Broader source: Energy.gov [DOE]

    WASHINGTON, D.C. - U.S. Department of Energy Secretary Samuel W. Bodman and Minister of Natural Resources for Canada Gary Lunn, today released the final report on the power outage that affected 50...

  18. Energy Secretary Bodman and Minister of Natural Resources for Canada Lunn Release the 2003 Power Outage Final Report

    Broader source: Energy.gov [DOE]

    U.S. Department of Energy Secretary Samuel W. Bodman and Minister of Natural Resources for Canada Gary Lunn, today released the final report on the power outage that affected 50 million North Americans in August 2003.

  19. Automatically processed alpha-track radon monitor

    DOE Patents [OSTI]

    Langner, G.H. Jr.

    1993-01-12

    An automatically processed alpha-track radon monitor is provided which includes a housing having an aperture allowing radon entry, and a filter that excludes the entry of radon daughters into the housing. A flexible track registration material is located within the housing that records alpha-particle emissions from the decay of radon and radon daughters inside the housing. The flexible track registration material is capable of being spliced such that the registration material from a plurality of monitors can be spliced into a single strip to facilitate automatic processing of the registration material from the plurality of monitors. A process for the automatic counting of radon registered by a radon monitor is also provided.

  20. Methodology to predict the number of forced outages due to creep failure

    SciTech Connect (OSTI)

    Palermo, J.V. Jr.

    1996-12-31

    All alloy metals at temperatures above 950 degrees Fahrenheit experience creep damage. Creep failures in boiler tubes usually begin after 25 to 40 years of operation. Since creep damage is irreversible, the only remedy is to replace the tube sections. By predicting the number of failures per year, the utility can make the best economic decision concerning tube replacement. This paper describes a methodology to calculate the number of forced outages per year due to creep failures. This methodology is particularly useful to utilities with boilers that have been in operation for at least 25 years.
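    A failures-per-year prediction of this kind can be sketched with a generic wear-out model. The paper's actual creep-damage methodology is not reproduced here; the Weibull shape and characteristic-life parameters below are invented stand-ins chosen only to show the shape of the calculation.

```python
import math

def expected_creep_failures(n_tubes, age_years, beta=4.0, eta=45.0):
    """Expected tube failures over the coming year under a Weibull wear-out
    model (shape beta > 1, characteristic life eta in years)."""
    def cdf(t):
        return 1.0 - math.exp(-((t / eta) ** beta))
    surviving = 1.0 - cdf(age_years)
    # Fraction of currently surviving tubes expected to fail between
    # age_years and age_years + 1 (conditional failure probability):
    frac_next_year = (cdf(age_years + 1.0) - cdf(age_years)) / surviving
    return n_tubes * frac_next_year
```

    Because the conditional failure probability rises with age for beta > 1, the predicted forced-outage count climbs year over year, which is what drives the tube-replacement economics discussed in the paper.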

  1. Analytical Tools to Predict Distribution Outage Restoration Load. Final Project Report.

    SciTech Connect (OSTI)

    Law, John

    1994-11-14

    The main activity of this project has been twofold: (1) development of a computer model to predict CLPU (Cold Load Pickup) and (2) development of a field measurement and analysis method to obtain the input parameters of the CLPU model. The field measurement and analysis method is called the Step-Voltage-Test (STEPV). The Kootenai Electric Cooperative Appleway 51 feeder in Coeur d'Alene was selected for analysis in this project, and STEPV tests were performed in the winters of 1992 and 1993. The STEPV data was analyzed (method and results presented within this report) to obtain the Appleway 51 feeder parameters for prediction by the CLPU model. Only one CLPU record was obtained in winter 1994. Unfortunately, the actual CLPU was not dramatic (short outage and moderate temperature) and did not display cyclic restoration current. A predicted Appleway 51 feeder CLPU was generated using the parameters obtained via the STEPV measurement/analysis/algorithm method at the same ambient temperature and outage duration as the measured actual CLPU. The predicted CLPU corresponds reasonably well with the single actual CLPU data obtained in winter 1994 on the Appleway 51 feeder.

  2. Automatic Differentiation Package

    Energy Science and Technology Software Center (OSTI)

    2007-03-01

    Sacado is an automatic differentiation package for C++ codes using operator overloading and C++ templating. Sacado provides forward, reverse, and Taylor polynomial automatic differentiation classes and utilities for incorporating these classes into C++ codes. Users can compute derivatives of computations arising in engineering and scientific applications, including nonlinear equation solving, time integration, sensitivity analysis, stability analysis, optimization, and uncertainty quantification.
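    The operator-overloading idea behind Sacado's forward-mode classes can be shown with a minimal dual-number type. This Python sketch illustrates the technique only; it is not Sacado's API (Sacado implements the same idea with C++ expression templates).

```python
class Dual:
    """A value paired with its derivative; arithmetic propagates both."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._wrap(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._wrap(other)
        # Product rule carries the derivative alongside the value.
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Seed dx/dx = 1 and read the derivative off the result."""
    return f(Dual(x, 1.0)).der

# d/dx (x*x + 3*x) at x = 2 is 2*x + 3 = 7
```

    Because the derivative rides along with every operation, any code written generically over the value type gets exact derivatives without symbolic manipulation or finite differences, which is precisely what makes the approach attractive for sensitivity analysis and optimization.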

  3. Reducing Duration of Refueling Outage by Optimizing Core Design and Shuffling Sequence

    SciTech Connect (OSTI)

    Wakker, P.H.; Verhagen, F.C.M.; Bloois, J.T. van; Sutton, W.R. III

    2005-07-15

    Reducing the duration of refueling outage is possible by optimizing the core design and the shuffling sequence. For both options software tools have been developed that have been applied to the three most recent cycles of the Borssele plant in the Netherlands. Applicability of the shuffling sequence optimization to boiling water reactors has been demonstrated by a comparison to a recent shuffle plan used in the Hatch plant located in the United States. Their uses have shown that both core design and shuffling sequence optimization can be exploited to reduce the time needed for reloading a core with an in-core shuffling scheme. Ex-core shuffling schemes for pressurized water reactors can still have substantial benefit from a core design using a minimized number of insert shuffles.

  4. Automatic switching matrix

    DOE Patents [OSTI]

    Schlecht, Martin F.; Kassakian, John G.; Caloggero, Anthony J.; Rhodes, Bruce; Otten, David; Rasmussen, Neil

    1982-01-01

    An automatic switching matrix that includes an apertured matrix board containing a matrix of wires that can be interconnected at each aperture. Each aperture has associated therewith a conductive pin which, when fully inserted into the associated aperture, effects electrical connection between the wires within that particular aperture. Means is provided for automatically inserting the pins in a determined pattern and for removing all the pins to permit other interconnecting patterns.

  5. Understanding the Benefits of Dispersed Grid-Connected Photovoltaics: From Avoiding the Next Major Outage to Taming Wholesale Power Markets

    SciTech Connect (OSTI)

    Letendre, Steven E.; Perez, Richard

    2006-07-15

    Thanks to new solar resource assessment techniques using cloud cover data available from geostationary satellites, it is apparent that grid-connected PV installations can serve to enhance electric grid reliability, preventing or hastening recovery from major power outages and serving to mitigate extreme price spikes in wholesale energy markets. (author)

  6. Olkiluoto 1 and 2 - Plant efficiency improvement and lifetime extension-project (PELE) implemented during outages 2010 and 2011

    SciTech Connect (OSTI)

    Kosonen, M.; Hakola, M.

    2012-07-01

    Teollisuuden Voima Oyj (TVO) is a non-listed public company founded in 1969 to produce electricity for its stakeholders. TVO is the operator of the Olkiluoto nuclear power plant and follows the principle of continuous improvement in the operation and maintenance of the Olkiluoto plant units. The PELE project (Plant Efficiency Improvement and Lifetime Extension), mainly completed during the annual outages in 2010 and 2011, forms one part of the systematic development of the Olkiluoto units. TVO maintains a long-term development program that aims at systematically modernizing the plant unit systems and equipment based on the latest technology. According to the program, the Olkiluoto 1 and Olkiluoto 2 plant units are constantly renovated with the intention of keeping them safe and reliable. The aim of the modernization projects is to improve the safety, reliability, and performance of the plant units. The PELE project was carried out at Olkiluoto 1 in 2010 and at Olkiluoto 2 in 2011. The outage length at Olkiluoto 1 was 26 d 12 h 4 min, and the Olkiluoto 2 outage length was 28 d 23 h 46 min. (A normal service outage is about 14 days including refueling, and a refueling-only outage is about seven days. See figure 1.) The PELE project consisted of several single projects collected into one for coordinated project management. Some of the main projects were as follows: - Low-pressure turbines: rotor, stator vane, casing, and turbine instrumentation replacement. - Replacement of condenser cooling water pumps (later called seawater pumps). - Replacement of inner isolation valves on the main steam lines. - Generator and generator cooling system replacement. - Low-voltage switchgear replacement (this project will continue during future outages). PELE was a success: 100 TVO employees and 1500 subcontractor employees participated in the project, and the execution of the PELE projects went extremely well during the outages.
The replacement of the low pressure turbines and seawater pumps improved the

  7. Property:EditCount | Open Energy Information

    Open Energy Info (EERE)

    Property Name: EditCount. Property Type: Number. Description: Number of user edits. Pages using the property "EditCount": Showing 25 pages using...

  8. Plant Outage Time Savings Provided by Subcritical Physics Testing at Vogtle Unit 2

    SciTech Connect (OSTI)

    Cupp, Philip [Southern Nuclear Company (United States); Heibel, M.D. [Westinghouse Electric Company, LLC (United States)

    2006-07-01

    The most recent core reload design verification physics testing done at Southern Nuclear Company's (SNC) Vogtle Unit 2, performed prior to initial power operations in operating cycle 12, was successfully completed while the reactor was at least 1% Δk/k subcritical. The testing program used was the first application of the Subcritical Physics Testing (SPT) program developed by the Westinghouse Electric Company LLC. The SPT program centers on the application of the Westinghouse Subcritical Rod Worth Measurement (SRWM) methodology that was developed in cooperation with the Vogtle Reactor Engineering staff. The SRWM methodology received U.S. Nuclear Regulatory Commission (NRC) approval in August of 2005. The first application of the SPT program occurred at Vogtle Unit 2 in October of 2005. The results of the core design verification measurements obtained during the SPT program demonstrated excellent agreement with prediction, demonstrating that the predicted core characteristics were in excellent agreement with the actual operating characteristics of the core. This paper presents an overview of the SPT Program used at Vogtle Unit 2 during operating cycle 12, and a discussion of the critical path outage time savings the SPT program is capable of providing. (authors)

  9. Risk-based evaluation of Allowed Outage Times (AOTs) considering risk of shutdown

    SciTech Connect (OSTI)

    Mankamo, T.; Kim, I.S.; Samanta, P.K.

    1992-12-31

    When safety systems fail during power operation, Technical Specifications (TS) usually limit the repair to within the Allowed Outage Time (AOT). If the repair cannot be completed within the AOT, or no AOT is allowed, the plant is required to be shut down for the repair. However, if the capability to remove decay heat is degraded, shutting down the plant with the need to operate the affected decay-heat removal systems may impose a substantial risk compared to continued power operation over a usual repair time. Thus, defining a proper AOT in such situations can be considered as a risk comparison between the repair in the full power state, with a temporarily increased level of risk, and the alternative of shutting down the plant for the repair in the zero power state, with a specific associated risk. The methodology of the risk-comparison approach, with due consideration of the shutdown risk, has been further developed and applied to the AOT considerations of residual heat removal and standby service water systems of a boiling water reactor (BWR) plant. Based on the completed work, several improvements to the TS requirements for the systems studied can be suggested.
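    The risk comparison at the heart of the AOT evaluation can be sketched as a simple inequality between two accumulated risks. The rate and time values below are purely hypothetical placeholders; the paper's probabilistic models are far more detailed.

```python
def continued_operation_risk(r_at_power, repair_hours):
    """Risk accumulated by staying at power with the component out of service."""
    return r_at_power * repair_hours

def shutdown_for_repair_risk(r_transition, r_zero_power, repair_hours):
    """One-time transition risk of shutting down, plus time-at-risk while
    shut down (with the affected decay-heat removal systems challenged)."""
    return r_transition + r_zero_power * repair_hours

def continued_operation_preferred(r_at_power, r_transition, r_zero_power, repair_hours):
    return (continued_operation_risk(r_at_power, repair_hours)
            < shutdown_for_repair_risk(r_transition, r_zero_power, repair_hours))

# With hypothetical rates, a short repair favors staying at power,
# while a long repair favors shutting down:
short_ok = continued_operation_preferred(2e-6, 1e-4, 5e-7, 24.0)
long_ok = continued_operation_preferred(2e-6, 1e-4, 5e-7, 1000.0)
```

    The crossover repair time, where the two sides of the inequality are equal, is one way to motivate a numerical AOT: beyond it, continuing at power is no longer the lower-risk option.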

  10. Risk-based evaluation of Allowed Outage Times (AOTs) considering risk of shutdown

    SciTech Connect (OSTI)

    Mankamo, T. ); Kim, I.S.; Samanta, P.K. )

    1992-01-01

    When safety systems fail during power operation, Technical Specifications (TS) usually limit the repair to within the Allowed Outage Time (AOT). If the repair cannot be completed within the AOT, or no AOT is allowed, the plant is required to be shut down for the repair. However, if the capability to remove decay heat is degraded, shutting down the plant with the need to operate the affected decay-heat removal systems may impose a substantial risk compared to continued power operation over a usual repair time. Thus, defining a proper AOT in such situations can be considered as a risk comparison between the repair in the full power state, with a temporarily increased level of risk, and the alternative of shutting down the plant for the repair in the zero power state, with a specific associated risk. The methodology of the risk-comparison approach, with due consideration of the shutdown risk, has been further developed and applied to the AOT considerations of residual heat removal and standby service water systems of a boiling water reactor (BWR) plant. Based on the completed work, several improvements to the TS requirements for the systems studied can be suggested.

  11. Reactor component automatic grapple

    DOE Patents [OSTI]

    Greenaway, Paul R.

    1982-01-01

    A grapple for handling nuclear reactor components in a medium such as liquid sodium which, upon proper seating and alignment of the grapple with the component as sensed by a mechanical logic integral to the grapple, automatically seizes the component. The mechanical logic system also precludes seizure in the absence of proper seating and alignment.

  12. Automatic sweep circuit

    DOE Patents [OSTI]

    Keefe, Donald J.

    1980-01-01

    An automatically sweeping circuit for searching for an evoked response in an output signal in time with respect to a trigger input. Digital counters are used to activate a detector at precise intervals, and monitoring is repeated for statistical accuracy. If the response is not found then a different time window is examined until the signal is found.
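    The search the abstract describes (examine one time window after the trigger, repeat for statistical accuracy, then move to the next window) can be sketched in software, although the patent implements it with digital counters and an analog detector. Everything below, including the threshold and repeat criterion, is a hypothetical digital analogue:

```python
def find_response_window(sweeps, window, threshold, repeats_required=3):
    """Scan successive time windows after the trigger; a window 'contains'
    the evoked response when its summed signal exceeds a threshold on
    several repeated sweeps (statistical confirmation).
    sweeps: list of repeated sweeps, each a list of per-bin signal values."""
    n_bins = len(sweeps[0])
    for start in range(0, n_bins - window + 1):
        hits = sum(
            1 for sweep in sweeps
            if sum(sweep[start:start + window]) > threshold
        )
        if hits >= repeats_required:
            return start  # bin index where the response was found
    return None  # no window satisfied the criterion

sweeps = [[0, 0, 0, 5, 6, 0], [0, 0, 0, 6, 5, 0], [0, 0, 0, 5, 5, 0]]
print(find_response_window(sweeps, window=2, threshold=8))  # -> 3
```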

  13. AUTOmatic Message PACKing Facility

    Energy Science and Technology Software Center (OSTI)

    2004-07-01

    AUTOPACK is a library that provides several useful features for programs using the Message Passing Interface (MPI). Features include: 1. automatic message packing facility; 2. management of send and receive requests; 3. management of message buffer memory; 4. determination of the number of anticipated messages from a set of arbitrary sends; and 5. deterministic message delivery for testing purposes.

  14. Short-Term Energy Outlook Supplement: 2015 Outlook for Gulf of Mexico Hurricane-Related Production Outages

    Gasoline and Diesel Fuel Update (EIA)

    2014 Outlook for Gulf of Mexico Hurricane-Related Production Outages June 2014 Independent Statistics & Analysis www.eia.gov U.S. Department of Energy Washington, DC 20585 U.S. Energy Information Administration | STEO Supplement: 2014 Hurricane Outlook. This report was prepared by the U.S. Energy Information Administration (EIA), the statistical and analytical agency within the U.S. Department of Energy. By law, EIA's data, analyses, and forecasts are independent of approval by any other

  15. Short-Term Energy Outlook Supplement: 2013 Outlook for Gulf of Mexico Hurricane-Related Production Outages

    U.S. Energy Information Administration (EIA) Indexed Site

    2013 Outlook for Gulf of Mexico Hurricane-Related Production Outages June 2013 Independent Statistics & Analysis www.eia.gov U.S. Department of Energy Washington, DC 20585 U.S. Energy Information Administration | STEO Supplement: 2013 Hurricane Outlook. This report was prepared by the U.S. Energy Information Administration (EIA), the statistical and analytical agency within the U.S. Department of Energy. By law, EIA's data, analyses, and forecasts are independent of approval by any other

  16. LINEAR COUNT-RATE METER

    DOE Patents [OSTI]

    Henry, J.J.

    1961-09-01

    A linear count-rate meter is designed to provide a highly linear output while receiving counting rates from one cycle per second to 100,000 cycles per second. Input pulses enter a linear discriminator and then are fed to a trigger circuit which produces positive pulses of uniform width and amplitude. The trigger circuit is connected to a one-shot multivibrator. The multivibrator output pulses have a selected width. Feedback means are provided for preventing transistor saturation in the multivibrator which improves the rise and decay times of the output pulses. The multivibrator is connected to a diode-switched, constant current metering circuit. A selected constant current is switched to an averaging circuit for each pulse received, and for a time determined by the received pulse width. The average output meter current is proportional to the product of the counting rate, the constant current, and the multivibrator output pulse width.
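    The metering relationship in the last sentence (average output current proportional to the product of counting rate, constant current, and pulse width) is a duty-cycle argument that can be checked directly. The values below are hypothetical:

```python
def average_meter_current(count_rate_hz, constant_current_a, pulse_width_s):
    """Each input pulse switches a constant current into the averaging
    circuit for one fixed pulse width, so the time-averaged current is the
    product of the three quantities (a duty-cycle argument)."""
    return count_rate_hz * constant_current_a * pulse_width_s

# Hypothetical: 1 kHz input, 1 mA switched current, 10 us pulses
# -> 1% duty cycle -> 10 uA average output current.
i_avg = average_meter_current(1e3, 1e-3, 10e-6)
print(i_avg)
```

    Doubling the count rate doubles the average current, which is the linearity property the patent claims over its stated input range.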

  17. Automatic Fault Classification

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Automatic Fault Classification of Photovoltaic Strings Based on an In Situ IV Characterization System and a Gaussian Process Algorithm. C. Birk Jones ∗ , Manel Martínez-Ramón ‡ , § Ryan Smith † , Craig K. Carmignani ∗ , Olga Lavrova ∗ , Charles Robinson ∗ , and Joshua S. Stein ∗ ∗ Sandia National Laboratories Solar PV & Grid Integration, Albuquerque, NM, USA. ‡ Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM, USA, §

  18. Automatic range selector

    DOE Patents [OSTI]

    McNeilly, Clyde E.

    1977-01-04

    A device is provided for automatically selecting from a plurality of ranges of a scale of values to which a meter may be made responsive, that range which encompasses the value of an unknown parameter. A meter relay indicates whether the unknown is of greater or lesser value than the range to which the meter is then responsive. The rotatable part of a stepping relay is rotated in one direction or the other in response to the indication from the meter relay. Various positions of the rotatable part are associated with particular scales. Switching means are sensitive to the position of the rotatable part to couple the associated range to the meter.

  19. AUTOMATIC FREQUENCY CONTROL SYSTEM

    DOE Patents [OSTI]

    Hansen, C.F.; Salisbury, J.D.

    1961-01-10

    A control is described for automatically matching the frequency of a resonant cavity to that of a driving oscillator. The driving oscillator is disconnected from the cavity and a secondary oscillator is actuated in which the cavity is the frequency-determining element. A low frequency is mixed with the output of the driving oscillator and the resultant lower and upper sidebands are separately derived. The frequencies of the sidebands are compared with the secondary oscillator frequency, deriving a servo control signal to adjust a tuning element in the cavity and match the cavity frequency to that of the driving oscillator. The driving oscillator may then be connected to the cavity.

  20. Automatic readout micrometer

    DOE Patents [OSTI]

    Lauritzen, Ted

    1982-01-01

    A measuring system is disclosed for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibility of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  1. Automatic readout micrometer

    DOE Patents [OSTI]

    Lauritzen, T.

    A measuring system is described for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibility of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  2. Count-doubling time safety circuit

    DOE Patents [OSTI]

    Rusch, Gordon K.; Keefe, Donald J.; McDowell, William P.

    1981-01-01

    There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary.
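    A count-factor-increase check of this kind can be sketched as a comparison of counts from successive equal time windows. The factor and the minimum-count cut below are hypothetical illustration values, not taken from the patent:

```python
def scram_needed(prev_count, curr_count, factor=2.0, min_counts=100):
    """Compare neutron-detector counts from two successive equal time
    periods; flag a scram when the count rate has increased by more than
    `factor`. The minimum-count cut avoids triggering on statistical
    fluctuations at very low rates."""
    if prev_count < min_counts:
        return False
    return curr_count >= factor * prev_count

print(scram_needed(1000, 2100))  # rate more than doubled -> True
```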

  3. Automatic UT inspection of economizer at TVA`s Paradise plant

    SciTech Connect (OSTI)

    Brophy, J.W.; Chang, P.

    1995-12-31

    In March 1995, Tennessee Valley Authority (TVA) and Southwest Research Institute (SwRI) conducted testing of a multi-element ultrasonic probe designed to inspect economizer tubing in the Paradise power plant during the spring outage. This evaluation was to determine general loss of wall thickness due to erosion/corrosion and preferential inside diameter (ID) corrosion at butt welds in straight sections of the tube. The erosion/corrosion wall loss occurs during service, while the butt weld corrosion occurs out of service when water collects in the weld groove during outages and results in localized pitting in the weld groove. The ultrasonic (UT) probe was designed to acquire thickness measurements from the ID of the economizer tubes and to permit accurate, very rapid UT inspection. To attain a high rate of speed inside the tubes, an eight-element circular array of transducers was designed into the probe head. Thickness and location data are collected automatically by a portable computer.

  4. Low Background Counting At SNOLAB

    SciTech Connect (OSTI)

    Lawson, Ian; Cleveland, Bruce [SNOLAB, 1039 Regional Rd 24, Lively, ON P3Y 1N2 (Canada)

    2011-04-27

    It is a continuous and ongoing effort to maintain radioactivity in materials and in the environment surrounding most underground experiments at very low levels. These low levels are required so that experiments can achieve the required sensitivities for the detection of low-energy neutrinos, searches for dark matter, and neutrinoless double-beta decay. SNOLAB has several facilities which are used to determine these low background levels in the materials and the underground environment. This paper describes the SNOLAB High Purity Germanium Detector, which has been in continuous use for the past five years, and gives results for many of the items that have been counted over that period. Brief descriptions of SNOLAB's alpha-beta and electrostatic counters are given, and the radon levels at SNOLAB are discussed.

  5. Well coincidence counting and analysis

    SciTech Connect (OSTI)

    Lu, Ming-Shih; Teichmann, T.; Ceo, R.N.; Collins, L.L.

    1994-03-01

    In several recent papers a physical/mathematical model was developed to describe the nuclear multiplicative processes in samples containing fissile material from a general statistical viewpoint, starting with the basic underlying physical phenomena. The results of this model agreed with the established picture used in "standard" HLNCC (High Level Neutron Coincidence Counter) measurements, but considerably extended them, and allowed a more detailed interpretation of the underlying physical mechanisms and of the higher moments of the neutron counts. The present paper examines some recent measurements made at Y-12 (Oak Ridge) using the AWCC, in the light of this model. The results show internal consistency under a variety of conditions, and give good agreement between experiment and theory.

  6. Clothes Dryer Automatic Termination Evaluation

    SciTech Connect (OSTI)

    TeGrotenhuis, Ward E.

    2014-10-01

    Volume 2: Improved Sensor and Control Designs. Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.

  7. Automatic safety rod for reactors

    DOE Patents [OSTI]

    Germer, John H.

    1988-01-01

    An automatic safety rod for a nuclear reactor containing neutron absorbing material and designed to be inserted into a reactor core after a loss of core flow. Actuation occurs upon either a sudden decrease in core pressure drop or a decrease of the pressure drop below a predetermined minimum value. The automatic control rod includes a pressure regulating device whereby a controlled decrease in operating pressure due to reduced coolant flow does not cause the rod to drop into the core.

  8. Anomalous liquid scintillation counting of chromium-51

    SciTech Connect (OSTI)

    Charig, A.; Blake-Haskins, J.; Eigen, E.

    1985-12-01

    Unusual behavior of chromium-51 in liquid scintillation cocktail is described. Rapidly declining count rate is attributed to first-order binding of chromate to glass vials.

  9. Low Background Counting at LBNL

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Smith, A. R.; Thomas, K. J.; Norman, E. B.; Chan, Y. D.; Lesko, K. T.; Hurley, D. L.

    2015-03-24

    The Low Background Facility (LBF) at Lawrence Berkeley National Laboratory in Berkeley, California provides low background gamma spectroscopy services to a wide array of experiments and projects. The analysis of samples takes place within two unique facilities; locally within a carefully-constructed, low background cave and remotely at an underground location that historically has operated underground in Oroville, CA, but has recently been relocated to the Sanford Underground Research Facility (SURF) in Lead, SD. These facilities provide a variety of gamma spectroscopy services to low background experiments primarily in the form of passive material screening for primordial radioisotopes (U, Th, K) or common cosmogenic/anthropogenic products, as well as active screening via Neutron Activation Analysis for specific applications. The LBF also provides hosting services for general R&D testing in low background environments on the surface or underground for background testing of detector systems or similar prototyping. A general overview of the facilities, services, and sensitivities is presented. Recent activities and upgrades will also be presented, such as the completion of a 3π anticoincidence shield at the surface station and environmental monitoring of Fukushima fallout. The LBF is open to any users for counting services or collaboration on a wide variety of experiments and projects.

  10. Low Background Counting at LBNL

    SciTech Connect (OSTI)

    Smith, A. R.; Thomas, K. J.; Norman, E. B.; Chan, Y. D.; Lesko, K. T.; Hurley, D. L.

    2015-03-24

    The Low Background Facility (LBF) at Lawrence Berkeley National Laboratory in Berkeley, California provides low background gamma spectroscopy services to a wide array of experiments and projects. The analysis of samples takes place within two unique facilities; locally within a carefully-constructed, low background cave and remotely at an underground location that historically has operated underground in Oroville, CA, but has recently been relocated to the Sanford Underground Research Facility (SURF) in Lead, SD. These facilities provide a variety of gamma spectroscopy services to low background experiments primarily in the form of passive material screening for primordial radioisotopes (U, Th, K) or common cosmogenic/anthropogenic products, as well as active screening via Neutron Activation Analysis for specific applications. The LBF also provides hosting services for general R&D testing in low background environments on the surface or underground for background testing of detector systems or similar prototyping. A general overview of the facilities, services, and sensitivities is presented. Recent activities and upgrades will also be presented, such as the completion of a 3π anticoincidence shield at the surface station and environmental monitoring of Fukushima fallout. The LBF is open to any users for counting services or collaboration on a wide variety of experiments and projects.

  11. Automatic rapid attachable warhead section

    DOE Patents [OSTI]

    Trennel, Anthony J.

    1994-05-10

    Disclosed are a method and apparatus for (1) automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, (2) automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, (3) manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and (4) automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly.

  12. Automatic rapid attachable warhead section

    DOE Patents [OSTI]

    Trennel, A.J.

    1994-05-10

    Disclosed are a method and apparatus for automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly. 10 figures.

  13. Time Variant Floating Mean Counting Algorithm

    Energy Science and Technology Software Center (OSTI)

    1999-06-03

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.
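    The patented WSRC algorithm itself is not detailed in this record. For orientation, a generic floating (exponentially weighted) mean of per-interval counts looks like the sketch below; the smoothing parameter `alpha` and the sample values are hypothetical:

```python
def floating_mean(counts, alpha=0.1):
    """Generic exponentially weighted floating mean of per-interval
    counts: each new sample nudges the running mean toward itself by a
    fraction alpha, so the estimate tracks a changing count rate while
    smoothing statistical noise."""
    mean = float(counts[0])
    history = [mean]
    for c in counts[1:]:
        mean += alpha * (c - mean)
        history.append(mean)
    return history

# A step change in rate from 10 to 50 counts per interval: the floating
# mean moves toward 50 gradually rather than jumping.
rates = floating_mean([10, 10, 10, 50, 50, 50])
print(round(rates[-1], 2))
```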

  14. The impact of fuel cladding failure events on occupational radiation exposures at nuclear power plants: Case study, PWR (pressurized-water reactor) during an outage

    SciTech Connect (OSTI)

    Moeller, M.P.; Martin, G.F.; Kenoyer, J.L.

    1987-08-01

    This report is the second in a series of case studies designed to evaluate the magnitude of increase in occupational radiation exposures at commercial US nuclear power plants resulting from small incidents or abnormal events. The event evaluated is fuel cladding failure, which can result in elevated primary coolant activity and increased radiation exposure rates within a plant. For this case study, radiation measurements were made at a pressurized-water reactor (PWR) during a maintenance and refueling outage. The PWR had been operating for 22 months with fuel cladding failure characterized as 105 pin-hole leakers, the equivalent of 0.21% failed fuel. Gamma spectroscopy measurements, radiation exposure rate determinations, thermoluminescent dosimeter (TLD) assessments, and air sample analyses were made in the plant's radwaste, pipe penetration, and containment buildings. Based on the data collected, evaluations indicate that the relative contributions of activation products and fission products to the total exposure rates were constant over the duration of the outage. This constancy is due to the significant contribution from the longer-lived isotopes of cesium (a fission product) and cobalt (an activation product). For this reason, fuel cladding failure events remain as significant to occupational radiation exposure during an outage as during routine operations. As documented in the previous case study (NUREG/CR-4485 Vol. 1), fuel cladding failure events increased radiation exposure rates an estimated 540% at some locations of the plant during routine operations. Consequently, such events can result in significantly greater radiation exposure rates in many areas of the plant during the maintenance and refueling outages than would have been present under normal fuel conditions.

  15. FLOP Counts for "Small" Single-Node Miniapplication Tests

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    FLOP Counts for "Small" Single-Node Miniapplication Tests These data, obtained using the NERSC Hopper system, are provided...

  16. Automatic computation of transfer functions

    DOE Patents [OSTI]

    Atcitty, Stanley; Watson, Luke Dale

    2015-04-14

    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.

  17. Refinery Outages: Fall 2014

    Gasoline and Diesel Fuel Update (EIA)

    gasoline supply in a particular region because pipeline infrastructure, geography and marine shipping regulations constrain the amount of product that can flow among the different...

  18. Refinery Outages: Fall 2014

    Annual Energy Outlook [U.S. Energy Information Administration (EIA)]

    some Libyan crude oil production to the market, and increasing U.S. crude production. Economic growth in 2014 outside of the United States has been slow, and some recent data...

  19. Photon-counting solid-state photomultiplier

    SciTech Connect (OSTI)

    Petroff, M.D.; Stapelbroek, M.G.

    1989-02-01

    The Solid-State Photomultiplier is a silicon device capable of continuous detection of individual photons in the wavelength range from 0.4 to 28 μm. Operated with an applied bias near 7 volts, the device responds to the absorption of an incident photon with a submicrosecond-rise-time current pulse with a narrow amplitude distribution well above the electronic readout noise level. Optimal photon-counting performance occurs between 6 and 10 K and for count rates less than 10^10 counts/s per cm^2 of detector area. A 60% counting quantum efficiency has been demonstrated at 20 μm, and near 60% was observed in the visible light region. The underlying mechanism involves extremely fast internal charge amplification by impact ionization of impurity-band electrons and results in a pulse for each photoelectrically or thermally induced free carrier. The thermally induced dark pulse rate at 7 K is sufficiently low that background-limited detector performance is obtained at a background of less than 10^6 photons/cm^2·s.

  20. Differential white cell count by centrifugal microfluidics.

    SciTech Connect (OSTI)

    Sommer, Gregory Jon; Tentori, Augusto M.; Schaff, Ulrich Y.

    2010-07-01

    We present a method for counting white blood cells that is uniquely compatible with centrifugation based microfluidics. Blood is deposited on top of one or more layers of density media within a microfluidic disk. Spinning the disk causes the cell populations within whole blood to settle through the media, reaching an equilibrium based on the density of each cell type. Separation and fluorescence measurement of cell types stained with a DNA dye is demonstrated using this technique. The integrated signal from bands of fluorescent microspheres is shown to be proportional to their initial concentration in suspension. Among the current generation of medical diagnostics are devices based on the principle of centrifuging a CD-sized disk functionalized with microfluidics. These portable 'lab on a disk' devices are capable of conducting multiple assays directly from a blood sample, embodied by platforms developed by Gyros, Samsung, and Abaxis. [1,2] However, no centrifugal platform to date includes a differential white blood cell count, which is an important metric complementary to diagnostic assays. Measuring the differential white blood cell count (the relative fraction of granulocytes, lymphocytes, and monocytes) is a standard medical diagnostic technique useful for identifying sepsis, leukemia, AIDS, radiation exposure, and a host of other conditions that affect the immune system. Several methods exist for measuring the relative white blood cell count including flow cytometry, electrical impedance, and visual identification from a stained drop of blood under a microscope. However, none of these methods is easily incorporated into a centrifugal microfluidic diagnostic platform.

  1. Automatic Commercial Ice Makers | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Automatic Commercial Ice Makers The Department of Energy (DOE) develops standardized data templates for reporting the results of tests conducted in accordance with current DOE test procedures. Templates may be used by third-party laboratories under contract with DOE that conduct testing in support of ENERGY STAR® verification, DOE rulemakings, and enforcement of the federal energy conservation standards. Automatic Commercial Ice Makers -- v2.0 (111.62 KB)

  2. Automatic Monitoring & Control of Polymer Reactions Development...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Development and Implementation of an Automatic Continuous Online Monitoring and Control Platform for Polymerization Reactions Enabling energy and resource efficiency in polymer ...

  3. Caldyne Automatics Limited | Open Energy Information

    Open Energy Info (EERE)

    storage product manufacturer; also makes lighting systems based on solar, wind, and solar-wind hybrid systems.

  4. Automatic Mechetronic Wheel Light Device

    DOE Patents [OSTI]

    Khan, Mohammed John Fitzgerald

    2004-09-14

    A wheel lighting device for illuminating a wheel of a vehicle to increase safety and enhance aesthetics. The device produces the appearance of a "ring of light" on a vehicle's wheels as the vehicle moves. The "ring of light" can automatically change in color and/or brightness according to a vehicle's speed, acceleration, jerk, selection of transmission gears, and/or engine speed. The device provides auxiliary indicator lights by producing light in conjunction with a vehicle's turn signals, hazard lights, alarm systems, etc. The device comprises a combination of mechanical and electronic components and can be placed on the outer or inner surface of a wheel or made integral to a wheel or wheel cover. The device can be configured for all vehicle types, and is electrically powered by a vehicle's electrical system and/or battery.

  5. Automatic insulation resistance testing apparatus

    DOE Patents [OSTI]

    Wyant, Francis J.; Nowlen, Steven P.; Luker, Spencer M.

    2005-06-14

    An apparatus and method for automatic measurement of insulation resistances of a multi-conductor cable. In one embodiment of the invention, the apparatus comprises a power supply source, an input measuring means, an output measuring means, a plurality of input relay controlled contacts, a plurality of output relay controlled contacts, a relay controller and a computer. In another embodiment of the invention the apparatus comprises a power supply source, an input measuring means, an output measuring means, an input switching unit, an output switching unit and a control unit/data logger. Embodiments of the apparatus of the invention may also incorporate cable fire testing means. The apparatus and methods of the present invention use either voltage or current for input and output measured variables.

  6. Automatic toilet seat lowering apparatus

    DOE Patents [OSTI]

    Guerty, Harold G.

    1994-09-06

    A toilet seat lowering apparatus includes a housing defining an internal cavity for receiving water from the water supply line to the toilet holding tank. A descent delay assembly of the apparatus can include a stationary dam member and a rotating dam member for dividing the internal cavity into an inlet chamber and an outlet chamber and controlling the intake and evacuation of water in a delayed fashion. A descent initiator is activated when the internal cavity is filled with pressurized water and automatically begins the lowering of the toilet seat from its upright position, which lowering is also controlled by the descent delay assembly. In an alternative embodiment, the descent initiator and the descent delay assembly can be combined in a piston linked to the rotating dam member and provided with a water channel for creating a resisting pressure to the advancing piston and thereby slowing the associated descent of the toilet seat.

  7. Alternative Fuels Data Center: Alternative Fueling Station Counts by State

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]


  8. Active neutron multiplicity counting of bulk uranium

    SciTech Connect (OSTI)

Ensslin, N.; Krick, M.S.; Langner, D.G.; Miller, M.C.

    1991-01-01

    This paper describes a new nondestructive assay technique being developed to assay bulk uranium containing kilogram quantities of {sup 235}U. The new technique uses neutron multiplicity analysis of data collected with a coincidence counter outfitted with AmLi neutron sources. The authors have calculated the expected neutron multiplicity count rate and assay precision for this technique and will report on its expected performance as a function of detector design characteristics, {sup 235}U sample mass, AmLi source strength, and source-to-sample coupling.

  9. A Freely Available Matlab Script for Automatic Spatial Drift...

    Office of Scientific and Technical Information (OSTI)

Journal Article: A Freely Available Matlab Script for Automatic Spatial Drift Correction.

  10. Water-Efficient Technology Opportunity: Sprinkler Automatic Shut...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Water-Efficient Technology Opportunity: Sprinkler Automatic Shut-Off Devices The Federal Energy Management Program (FEMP) identified sprinkler automatic shut-off devices as a water...

  11. Install an Automatic Blowdown Control System

    SciTech Connect (OSTI)

    Not Available

    2006-01-01

This revised ITP steam tip sheet on installing automatic blowdown controls provides how-to advice for improving industrial steam systems using low-cost, proven practices and technologies.

  12. Low background counting techniques at SNOLAB

    SciTech Connect (OSTI)

Lawson, Ian; Cleveland, Bruce [SNOLAB, 1039 Regional Rd 24, Lively, ON P3Y 1N2 (Canada)]

    2013-08-08

Many of the experiments currently searching for dark matter, studying properties of neutrinos or searching for neutrinoless double beta decay require very low levels of radioactive backgrounds both in their own construction materials and in the surrounding environment. These low background levels are required so that the experiments can achieve the required sensitivities for their searches. SNOLAB has several facilities which are used to directly measure these radioactive backgrounds. This paper describes SNOLAB's High Purity Germanium Detectors, one of which has been in continuous use for the past seven years measuring materials for many experiments in operation or under construction at SNOLAB. A description of the characterisation of SNOLAB's new germanium well detector will be presented. In addition, brief descriptions of SNOLAB's alpha-beta and electrostatic counters and of SNOLAB's future low background counting laboratory will be given.

  13. Automatic safety rod for reactors. [LMFBR

    DOE Patents [OSTI]

    Germer, J.H.

    1982-03-23

An automatic safety rod for a nuclear reactor containing neutron absorbing material and designed to be inserted into a reactor core after a loss-of-flow. Actuation is based upon either a sudden decrease in core pressure drop or a decrease of the pressure drop below a predetermined minimum value. The automatic control rod includes a pressure regulating device whereby a controlled decrease in operating pressure due to reduced coolant flow does not cause the rod to drop into the core.

  14. Proposed Plan for the R-Area Bingham Pump Outage Pits (643-8G, -9G, -10G) and R-Area Unknown Pits No.1, No.2, No.3 (RUNK-1, -2, -3)

    SciTech Connect (OSTI)

    Mundy, S.

    2002-07-31

    The purpose of this proposed plan is to describe the preferred remedial alternative for the R-Area Bingham Pump Outage Pits (R BPOPs) and the R-Area Unknowns (RUNKs) operable unit (OU) and to provide for public involvement in the decision-making process.

  15. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOE Patents [OSTI]

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    2015-12-01

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
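    The final step described above, spreading a sampled fission-chain distribution into a continuous time-evolving sequence of event counts, can be illustrated with a small simulation. This is a hedged sketch only: the chain-size probabilities, initiation rate, and die-away constant below are hypothetical stand-ins, not the analytically computed fission chain distributions the patent uses.

```python
import random

def simulate_event_times(n_chains, chain_dist, rate, die_away, seed=0):
    """Spread sampled chain count multiplicities into a time-ordered
    event sequence (illustrative sketch; inputs are hypothetical).

    n_chains:   number of fission chains to simulate
    chain_dist: chain_dist[k] = P(chain yields k detected neutrons)
    rate:       chain initiation rate in 1/s (Poisson process)
    die_away:   detector die-away time constant in seconds
    """
    rng = random.Random(seed)
    sizes = list(range(len(chain_dist)))
    t = 0.0
    events = []
    for _ in range(n_chains):
        t += rng.expovariate(rate)                 # next chain start time
        k = rng.choices(sizes, weights=chain_dist)[0]
        for _ in range(k):                         # each detected neutron is
            events.append(t + rng.expovariate(1.0 / die_away))  # delayed
    events.sort()
    return events
```

With a fixed seed the output is reproducible; the resulting timestamp list can then be binned into count distributions for comparison against a model.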

  16. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOE Patents [OSTI]

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  17. Optical People Counting for Demand Controlled Ventilation: A...

    Office of Scientific and Technical Information (OSTI)

Title: Optical People Counting for Demand Controlled Ventilation: A Pilot Study of Counter Performance. This pilot ...

  18. Counting small RNA in disease-causing organisms

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    in disease-causing organisms Los Alamos researchers demonstrated improved technical methods capable of directly counting small RNA molecules in pathogenic (disease-causing)...

  19. When DNA Needs to Stand Up and Be Counted

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    biomolecules must be properly oriented to perform their biological function. In other words, the DNA literally must stand up to be counted. Understanding both the attachment...

  20. First AID (Atom counting for Isotopic Determination).

    SciTech Connect (OSTI)

    Roach, J. L.; Israel, K. M.; Steiner, R. E.; Duffy, C. J.; Roench, F. R.

    2002-01-01

Los Alamos National Laboratory (LANL) has established an in vitro bioassay monitoring program in compliance with the requirements in the Code of Federal Regulations, 10 CFR 835, Occupational Radiation Protection. One aspect of this program involves monitoring plutonium levels in at-risk workers. High-risk workers are monitored using the ultra-sensitive Thermal Ionization Mass Spectrometry (TIMS) technique to ensure compliance with DOE standards. TIMS is used to measure atom ratios of 239Pu and 240Pu with respect to a tracer isotope. These ratios are then used to calculate the amount of 239Pu and 240Pu present. This low-level atom counting technique allows the calculation of the concentration levels of 239Pu and 240Pu in urine for at-risk workers. From these concentration levels, dose assessments can be made and worker exposure levels can be monitored. Detection limits for TIMS analysis are on the order of millions of atoms, which translates to activity levels of 150 aCi for 239Pu and 500 aCi for 240Pu. Our poster presentation will discuss the ultra-sensitive, low-level analytical technique used to measure plutonium isotopes and the data verification methods used for validating isotopic measurements.

  1. Automatic visual inspection of hybrid microcircuits

    SciTech Connect (OSTI)

    Hines, R.E.

    1980-05-01

    An automatic visual inspection system using a minicomputer and a video digitizer was developed for inspecting hybrid microcircuits (HMC) and thin-film networks (TFN). The system performed well in detecting missing components on HMCs and reduced the testing time for each HMC by 75%.

  2. When DNA Needs to Stand Up and Be Counted

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

When DNA Needs to Stand Up and Be Counted Wednesday, 31 May 2006 00:00 DNA microarrays are small metal, glass, or silicon chips covered with patterns of short single-stranded DNA (ssDNA). These "DNA chips" are revolutionizing biotechnology, allowing scientists to identify and count many DNA sequences simultaneously. They are the enabling technology for genomic-based medicine and are a critical component of advanced diagnostic systems for

  3. Galaxy number counts to second order and their bispectrum

    SciTech Connect (OSTI)

    Dio, Enea Di; Durrer, Ruth; Marozzi, Giovanni; Montanari, Francesco E-mail: Ruth.Durrer@unige.ch E-mail: Francesco.Montanari@unige.ch

    2014-12-01

We determine the number counts to second order in cosmological perturbation theory in the Poisson gauge and allowing for anisotropic stress. The calculation is performed using an innovative approach based on the recently proposed "geodesic light-cone" gauge. This allows us to determine the number counts in a purely geometric way, without using Einstein's equation. The result is valid for general dark energy models and (most) modified gravity models. We then evaluate numerically some relevant contributions to the number counts bispectrum. In particular we consider the terms involving the density, redshift space distortion and lensing.

  4. Install an Automatic Blowdown-Control System | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Install an Automatic Blowdown-Control System. This steam tip sheet on installing automatic blowdown controls provides how-to advice for improving industrial steam systems using low-cost, proven practices and technologies. STEAM TIP SHEET #23 Install an Automatic Blowdown-Control System (January 2012) (408.53 KB) More Documents & Publications Minimize Boiler Blowdown Recover Heat from Boiler Blowdown Improving Steam System Performance: A Sourcebook for

  5. White House Council of Economic Advisers and Energy Department...

    Broader source: Energy.gov (indexed) [DOE]

    ... technologies that can quickly alert utilities when consumers experience a power outage or there is a system disruption and automatically reroute power to avoid further outages. ...

  6. Correlated neutron counting for the 21st century

    SciTech Connect (OSTI)

    Evans, Louise G

    2010-12-01

Correlated neutron counting techniques, such as neutron coincidence and multiplicity counting, are widely employed at nuclear fuel cycle facilities for the accountancy of nuclear material such as plutonium. These techniques need to be improved and enhanced to meet the challenges of complex measurement items and future nuclear safeguards applications: for example, the non-destructive assay of spent nuclear fuel, high counting-rate applications, small-sample measurements, and Helium-3 replacement. At the same time simulation tools, used for the design of detection systems based on these techniques, are being developed in anticipation of future needs. This seminar will present the theory and current state of the practice of temporally correlated neutron counting. A range of future safeguards applications will then be presented in the context of research projects at Los Alamos National Laboratory.

  7. Multianode cylindrical proportional counter for high count rates

    DOE Patents [OSTI]

    Hanson, James A.; Kopp, Manfred K.

    1981-01-01

    A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (<60 keV) at count rates of greater than 10.sup.5 counts/sec. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.

  8. Multianode cylindrical proportional counter for high count rates

    DOE Patents [OSTI]

    Hanson, J.A.; Kopp, M.K.

    1980-05-23

    A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (< 60 keV) at count rates of greater than 10/sup 5/ counts/sec. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.

  9. Gathering total items count for pagination | OpenEI Community

    Open Energy Info (EERE)

    Gathering total items count for pagination Home > Groups > Utility Rate Hi I'm using the following base link plus some restrictions to sector, utility, and locations to poll for...

  10. When DNA Needs to Stand Up and Be Counted

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    stand up to be counted. Understanding both the attachment and orientation of DNA on gold surfaces was the goal of recent experiments performed at ALS Beamline 8.0.1 by an...

  11. Method for automatically scramming a nuclear reactor

    DOE Patents [OSTI]

    Ougouag, Abderrafi M.; Schultz, Richard R.; Terry, William K.

    2005-12-27

    An automatically scramming nuclear reactor system. One embodiment comprises a core having a coolant inlet end and a coolant outlet end. A cooling system operatively associated with the core provides coolant to the coolant inlet end and removes heated coolant from the coolant outlet end, thus maintaining a pressure differential therebetween during a normal operating condition of the nuclear reactor system. A guide tube is positioned within the core with a first end of the guide tube in fluid communication with the coolant inlet end of the core, and a second end of the guide tube in fluid communication with the coolant outlet end of the core. A control element is positioned within the guide tube and is movable therein between upper and lower positions, and automatically falls under the action of gravity to the lower position when the pressure differential drops below a safe pressure differential.

  12. When DNA Needs to Stand Up and Be Counted

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

When DNA Needs to Stand Up and Be Counted DNA microarrays are small metal, glass, or silicon chips covered with patterns of short single-stranded DNA (ssDNA). These "DNA chips" are revolutionizing biotechnology, allowing scientists to identify and count many DNA sequences simultaneously. They are the enabling technology for genomic-based medicine and are a critical component of advanced diagnostic systems for medical and homeland security applications. Like digital chips, DNA

  13. When DNA Needs to Stand Up and Be Counted

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

When DNA Needs to Stand Up and Be Counted DNA microarrays are small metal, glass, or silicon chips covered with patterns of short single-stranded DNA (ssDNA). These "DNA chips" are revolutionizing biotechnology, allowing scientists to identify and count many DNA sequences simultaneously. They are the enabling technology for genomic-based medicine and are a critical component of advanced diagnostic systems for medical and homeland security applications. Like digital chips, DNA

  14. Students Count -- From the Classroom to the Conference | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Students Count -- From the Classroom to the Conference January 18, 2012 - 5:42pm Secretary Chu and former Governor of California Arnold Schwarzenegger speak with students at the 2011 Energy Innovation Summit. | Photo courtesy of ARPA-E. Alexa McClanahan Communications Support

  15. When DNA Needs to Stand Up and Be Counted

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

When DNA Needs to Stand Up and Be Counted DNA microarrays are small metal, glass, or silicon chips covered with patterns of short single-stranded DNA (ssDNA). These "DNA chips" are revolutionizing biotechnology, allowing scientists to identify and count many DNA sequences simultaneously. They are the enabling technology for genomic-based medicine and are a critical component of advanced diagnostic systems for medical and homeland security applications. Like digital chips, DNA

  16. Counting Down to the Collegiate Wind Competition 2016 | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Counting Down to the Collegiate Wind Competition 2016 May 19, 2016 - 10:15am Competitors test their turbines in a wind tunnel at the Collegiate Wind Competition 2015, held at the National Renewable Energy Laboratory's National Wind Technology Center just south of Boulder, Colorado. (Photo by Dennis Schroeder / NREL)

  17. Automatic detection of sweep-meshable volumes

    DOE Patents [OSTI]

Tautges, Timothy J.; White, David R.

    2006-05-23

    A method of and software for automatically determining whether a mesh can be generated by sweeping for a representation of a geometric solid comprising: classifying surface mesh schemes for surfaces of the representation locally using surface vertex types; grouping mappable and submappable surfaces of the representation into chains; computing volume edge types for the representation; recursively traversing surfaces of the representation and grouping the surfaces into source, target, and linking surface lists; and checking traversal direction when traversing onto linking surfaces.

  18. Automatic targeting of plasma spray gun

    DOE Patents [OSTI]

    Abbatiello, Leonard A.; Neal, Richard E.

    1978-01-01

    A means for monitoring the material portion in the flame of a plasma spray gun during spraying operations is provided. A collimated detector, sensitive to certain wavelengths of light emission, is used to locate the centroid of the material with each pass of the gun. The response from the detector is then relayed to the gun controller to be used to automatically realign the gun.

  19. Automatic Energy Calibration of Gamma-Ray Spectrometers

    Energy Science and Technology Software Center (OSTI)

    2011-09-19

The software provides an automatic method for calibrating the energy scale of high-purity germanium (HPGe) and scintillation gamma-ray spectrometers, using natural background radiation as the source of calibration gamma rays. In field gamma-ray spectroscopy, radioactive check sources may not be available; temperature changes can shift detector electronic gain and scintillator light output; and a user's experience and training may not include gamma-ray energy calibration. Hence, an automated method of calibrating the spectrometer using natural background would simplify its operation, especially by technician-level users, and by enhancing spectroscopic data quality, it would reduce false detections. Following a typically one-minute count of background gamma rays, the measured spectrum is searched for gamma-ray peaks, producing a list of peak centroids, in channels. Next, the ratio algorithm attempts to match the peak centroids found in the search to a user-supplied list of calibration gamma-ray energies. Finally, if three or more calibration energies have been matched to peaks, the energy equation parameters are determined by a least-squares fit, and the spectrum has been energy-calibrated. The ratio algorithm rests on the repeatable but irregular spacing of the background gamma-ray energies: together they form a unique set of ratios when normalized to the highest-energy calibration gamma ray; so too the corresponding peak centroids in the spectrum. The algorithm matches energy ratios to peak centroid ratios to determine which peak matches a given calibration energy.
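    The two stages of the ratio algorithm described above, matching centroid ratios to energy ratios and then fitting the energy equation, can be sketched as follows. This is an illustrative simplification, not the released software: the tolerance value and the assumption of a near-linear energy scale with negligible offset (so that centroid ratios track energy ratios) are mine.

```python
def match_peaks_to_energies(centroids, energies, tol=0.01):
    """Match peak centroids (channels) to known calibration energies by
    comparing ratios normalized to the highest value in each list.
    Returns a list of (centroid, energy) pairs."""
    matches = []
    ref_c = max(centroids)
    ref_e = max(energies)
    for e in energies:
        target = e / ref_e
        for c in centroids:
            if abs(c / ref_c - target) < tol:
                matches.append((c, e))
                break
    return matches

def fit_linear_calibration(matches):
    """Least-squares fit of E = a*channel + b from matched pairs."""
    n = len(matches)
    sx = sum(c for c, _ in matches)
    sy = sum(e for _, e in matches)
    sxx = sum(c * c for c, _ in matches)
    sxy = sum(c * e for c, e in matches)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

For example, centroids at channels 511, 1461, and 2615 with a gain near 1 keV/channel match the 511 keV annihilation line and the natural-background 1460.8 keV (40K) and 2614.5 keV (208Tl) lines, giving a slope near 1 and an intercept near 0.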

  20. Refinery Outages: First Half 2015

    Gasoline and Diesel Fuel Update (EIA)

to increase by 820,000 bbl/d in 2015. While global oil supply growth has been strong, economic growth outside of the United States has been slow, particularly in Russia and...

  1. Compensated count-rate circuit for radiation survey meter

    DOE Patents [OSTI]

    Todd, Richard A.

    1981-01-01

A count-rate compensating circuit is provided which may be used in a portable Geiger-Mueller (G-M) survey meter to ideally compensate for counting loss errors in the G-M tube detector. In a G-M survey meter, wherein the pulse rate from the G-M tube is converted into a pulse rate current applied to a current meter calibrated to indicate dose rate, the compensation circuit generates and controls a reference voltage in response to the rate of pulses from the detector. This reference voltage is gated to the current-generating circuit at a rate identical to the rate of pulses coming from the detector so that the current flowing through the meter is varied in accordance with both the frequency and amplitude of the reference voltage pulses applied thereto so that the count rate is compensated ideally to indicate a true count rate within 1% up to a 50% duty cycle for the detector. A positive feedback circuit is used to control the reference voltage so that the meter output tracks true count rate indicative of the radiation dose rate.
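    The counting losses this circuit compensates follow the standard non-paralyzable dead-time model; the sketch below is that textbook correction, not the patented analog circuit, and the dead-time value in the example is hypothetical.

```python
def true_count_rate(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: n = m / (1 - m*tau).

    measured_rate: observed counts/sec from the G-M tube (m)
    dead_time:     detector dead time in seconds (tau)
    """
    loss_fraction = measured_rate * dead_time  # the detector duty cycle
    if loss_fraction >= 1.0:
        raise ValueError("measured rate saturates the detector")
    return measured_rate / (1.0 - loss_fraction)

# At a 50% duty cycle (m*tau = 0.5) the true rate is twice the measured rate:
print(true_count_rate(5000.0, 1e-4))  # -> 10000.0
```

This shows why compensation up to a 50% duty cycle is a meaningful spec: at that point half of all counts are being lost.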

  2. Photon counting spectroscopy as done with a Thomson scattering diagnostic

    SciTech Connect (OSTI)

    Den Hartog, D.J.; Ruppert, D.E.

    1993-11-01

The measurement and reduction of photon counting spectral data is demonstrated within the context of a Thomson scattering diagnostic. This diagnostic contains a microchannel plate (MCP) photomultiplier tube (PMT) as the photon sensing device. The MCP PMT is not an ideal photon sensor: the loss of photoelectrons at the MCP input and the broad charge pulse distribution at the output add to the uncertainty in recorded data. Computer simulations are used to demonstrate an approach to quantification of this added uncertainty and to develop an understanding of its source; the methodology may be applicable to the development of an understanding of photon detectors other than an MCP PMT. Emphasis is placed on the Poisson statistical character of the data, because the assumption that a Gaussian probability distribution is a reasonable statistical description of photon counting data is often questionable. When the count rate is low, the product of the possible number of photon counts and the probability of measurement of a single photon is usually not sufficiently large to justify Gaussian statistics. Rather, because probabilities of measurement are so low, the Poisson probability distribution best quantifies the inherent statistical fluctuations in such counting measurements. The method of maximum likelihood is applied to derive the Poisson statistics equivalent of chi-squared. A Poisson statistics based data fitting code is implemented using the Newton-Raphson method of multi-dimensional root finding; we also demonstrate an algorithm to estimate the uncertainties in derived quantities.
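    The maximum-likelihood fitting described above can be illustrated in its simplest form. Assuming a model mu_i = A * shape_i with a single amplitude A (my simplification; the paper's code performs multi-dimensional Newton-Raphson), the gradient and curvature of the Poisson log-likelihood give the Newton step directly:

```python
def poisson_mle_amplitude(counts, shape, a0=1.0, iters=20):
    """Fit amplitude A of the model mu_i = A * shape_i to Poisson counts
    by Newton-Raphson on the Poisson log-likelihood
    L(A) = sum_i [n_i * log(A * f_i) - A * f_i]  (constant terms dropped)."""
    n_tot = sum(counts)
    f_tot = sum(shape)
    a = a0
    for _ in range(iters):
        grad = n_tot / a - f_tot      # dL/dA
        hess = -n_tot / (a * a)       # d2L/dA2 (always negative: concave)
        a -= grad / hess              # Newton-Raphson step
    return a
```

The analytic optimum is A = sum(counts)/sum(shape), so the iteration can be checked directly: counts [4, 6, 10] with shape [1, 1, 2] converge to A = 5.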

  3. Self-assessing target with automatic feedback

    DOE Patents [OSTI]

    Larkin, Stephen W.; Kramer, Robert L.

    2004-03-02

A self-assessing target with four quadrants and a method of use thereof. Each quadrant contains possible causes for why shots are going into that particular quadrant rather than the center mass of the target. Each possible cause is followed by a solution intended to help the marksman correct the problem causing the marksman to shoot in that particular area. In addition, the self-assessing target contains possible causes for general shooting errors and solutions to the causes of the general shooting error. The automatic feedback with instant suggestions and corrections enables the shooter to improve their marksmanship.

  4. A taxonomy of automatic differentiation tools

    SciTech Connect (OSTI)

Juedes, D.W. (Dept. of Computer Science)

    1991-01-01

    Many of the current automatic differentiation (AD) tools have similar characteristics. Unfortunately, the similarities between these various AD tools often cannot be easily ascertained by reading the corresponding documentation. To clarify this situation, a taxonomy of AD tools is presented. The taxonomy places AD tools into the Elemental, Extensional, Integral, Operational, and Symbolic classes. This taxonomy is used to classify twenty-nine AD tools. Each tool is examined individually with respect to the mode of differentiation used and the degree of derivatives computed. A list detailing the availability of the surveyed AD tools is provided in the Appendix. 54 refs., 3 figs., 1 tab.

  5. Evaluation of potential severe accidents during low power and shutdown operations at Grand Gulf, Unit 1. Volume 5: Analysis of core damage frequency from seismic events for plant operational state 5 during a refueling outage

    SciTech Connect (OSTI)

    Budnitz, R.J.; Davis, P.R.; Ravindra, M.K.; Tong, W.H.

    1994-08-01

In 1989 the US Nuclear Regulatory Commission (NRC) initiated an extensive program to examine carefully the potential risks during low-power and shutdown operations. The program included two parallel projects, one at Sandia National Laboratories studying a boiling water reactor (Grand Gulf), and the other at Brookhaven National Laboratory studying a pressurized water reactor (Surry Unit 1). Both the Sandia and Brookhaven projects have examined only accidents initiated by internal plant faults, so-called "internal initiators." This project, which has explored the likelihood of seismic-initiated core damage accidents during refueling outage conditions, is complementary to the internal-initiator analyses at Brookhaven and Sandia. This report covers the seismic analysis at Grand Gulf. All of the many systems modeling assumptions, component non-seismic failure rates, and human error rates that were used in the internal-initiator study at Grand Gulf have been adopted here, so that the results of the study can be as comparable as possible. Both the Sandia study and this study examine only one shutdown plant operating state (POS) at Grand Gulf, namely POS 5 representing cold shutdown during a refueling outage. This analysis has been limited to work analogous to a level-1 seismic PRA, in which estimates have been developed for the core-damage frequency from seismic events during POS 5. The results of the analysis are that the core-damage frequency for earthquake-initiated accidents during refueling outages in POS 5 is found to be quite low in absolute terms, less than 10{sup {minus}7}/year.

  6. Towards automatic planning for manufacturing generative processes

    SciTech Connect (OSTI)

    CALTON,TERRI L.

    2000-05-24

Generative process planning describes methods process engineers use to modify manufacturing/process plans after designs are complete. A completed design may be the result from the introduction of a new product based on an old design, an assembly upgrade, or modified product designs used for a family of similar products. An engineer designs an assembly and then creates plans capturing manufacturing processes, including assembly sequences, component joining methods, part costs, labor costs, etc. When new products originate as a result of an upgrade, component geometry may change, and/or additional components and subassemblies may be added to or omitted from the original design. As a result process engineers are forced to create new plans. This is further complicated by the fact that the process engineer is forced to manually generate these plans for each product upgrade. To generate new assembly plans for product upgrades, engineers must manually re-specify the manufacturing plan selection criteria and re-run the planners. To remedy this problem, special-purpose assembly planning algorithms have been developed to automatically recognize design modifications and automatically apply previously defined manufacturing plan selection criteria and constraints.

  7. AUTOMATIC DIFFERENTIATION OF AN EULERIAN HYDROCODE

    SciTech Connect (OSTI)

    R. HENNINGER; A. CARLE; P. MAUDLIN

    2000-11-01

    Automatic differentiation (AD) is applied to a two-dimensional Eulerian hydrodynamics computer code (hydrocode) to provide gradients that will be used for design optimization and uncertainty analysis. We examine AD in both the forward and adjoint (reverse) mode using Automatic Differentiation of Fortran (ADIFOR, version 3.0). Setup time, accuracy, and run times are described for three problems. The test set consists of a one-dimensional shock-propagation problem, a two-dimensional metal-jet-formation problem and a two-dimensional shell-collapse problem. Setup time for ADIFOR was approximately one month starting from a simplified, fixed-dimension version of the original code. ADIFOR produced accurate (as compared to finite difference) gradients in both modes for all of the problems. These test problems had 17 independent variables. We find that the forward mode is up to 39% slower and the adjoint mode is at least 11% faster than finding the gradient by means of finite differences. Problems of real interest will certainly have more independent variables. The adjoint mode is thus favored since the computational time increases only slightly for additional independent variables.
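    The forward mode timed in this study propagates a derivative alongside each value through the computation. ADIFOR does this by Fortran source transformation; the operator-overloading sketch below is only a conceptual analogue of the forward mode, not ADIFOR itself.

```python
class Dual:
    """Minimal forward-mode AD value: carries (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

# d/dx of f(x) = x*x + 3*x at x = 2 is 2*2 + 3 = 7
x = Dual(2.0, 1.0)   # seed the derivative of the independent variable
y = x * x + 3 * x
print(y.val, y.dot)  # -> 10.0 7.0
```

Seeding x.dot = 1.0 makes y.dot the derivative dy/dx. One forward pass is needed per independent variable, which is why the adjoint mode is expected to win as the number of independent variables grows beyond the 17 used here.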

  8. Automatic TLI recognition system, programmer's guide

    SciTech Connect (OSTI)

    Lassahn, G.D.

    1997-02-01

    This report describes the software of an automatic target recognition system (version 14), from a programmer's point of view. The intent is to provide information that will help people who wish to modify the software. In separate volumes are a general description of the ATR system, Automatic TLI Recognition System, General Description, and a user's manual, Automatic TLI Recognition System, User's Guide. 2 refs.

  9. Beijing Jingyi Century Automatic Equipment Co Ltd | Open Energy...

    Open Energy Info (EERE)

    Beijing Jingyi Century Automatic Equipment Co Ltd Place: Beijing Municipality, China Zip: 100079 Product: A Chinese equipment manufacturer provides monosilicon ingot puller and...

  10. DSOPilot project Automatic receipt of short circuiting indicators...

    Open Energy Info (EERE)

    project Automatic receipt of short circuiting indicators Country Denmark Coordinates 56.26392, 9.501785

  11. Compensated count-rate circuit for radiation survey meter

    DOE Patents [OSTI]

    Todd, R.A.

    1980-05-12

    A count-rate compensating circuit is provided which may be used in a portable Geiger-Mueller (G-M) survey meter to ideally compensate for counting loss errors in the G-M tube detector. In a G-M survey meter, wherein the pulse rate from the G-M tube is converted into a pulse rate current applied to a current meter calibrated to indicate dose rate, the compensation circuit generates and controls a reference voltage in response to the rate of pulses from the detector. This reference voltage is gated to the current-generating circuit at a rate identical to the rate of pulses coming from the detector, so that the current flowing through the meter varies in accordance with both the frequency and amplitude of the reference voltage pulses applied thereto. The count rate is thereby ideally compensated to indicate a true count rate within 1% up to a 50% duty cycle for the detector. A positive feedback circuit is used to control the reference voltage so that the meter output tracks the true count rate, indicative of the radiation dose rate.
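The counting loss being compensated can be illustrated digitally with the standard non-paralyzable dead-time model; this is an assumption for illustration, since the patent implements the compensation in analog circuitry:

```python
# Standard non-paralyzable dead-time correction for a G-M counter:
# true rate n = m / (1 - m * tau), where m is the measured rate and
# tau is the detector dead time. The fraction m * tau is the duty
# cycle during which the tube is unable to register new events.
def true_count_rate(measured_cps, dead_time_s):
    loss_fraction = measured_cps * dead_time_s
    if loss_fraction >= 1.0:
        raise ValueError("detector saturated: rate inconsistent with dead time")
    return measured_cps / (1.0 - loss_fraction)

# At a 50% duty cycle (m * tau = 0.5) the true rate is twice the measured rate.
m, tau = 5000.0, 1e-4           # 5000 cps measured, 100-microsecond dead time
print(true_count_rate(m, tau))  # -> 10000.0
```

The 50% duty cycle in the example corresponds to the limit quoted in the abstract for 1% compensation accuracy.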

  12. AUTOMATIC CALIBRATING SYSTEM FOR PRESSURE TRANSDUCERS

    DOE Patents [OSTI]

    Amonette, E.L.; Rodgers, G.W.

    1958-01-01

    An automatic system for calibrating a number of pressure transducers is described. The disclosed embodiment of the invention uses a mercurial manometer to measure the air pressure applied to the transducer. A servo system follows the top of the mercury column as the pressure is changed and operates an analog-to-digital converter. This converter furnishes electrical pulses, each representing an increment of pressure change, to a reversible counter. The transducer furnishes a signal at each calibration point, causing an electric typewriter and a card-punch machine to record the pressure at that instant, as indicated by the counter. Another counter keeps track of the calibration points so that a number identifying each point is recorded with the corresponding pressure. A special relay control system controls the pressure trend and programs the sequential calibration of several transducers.

  13. Automatic identification of abstract online groups

    DOE Patents [OSTI]

    Engel, David W; Gregory, Michelle L; Bell, Eric B; Cowell, Andrew J; Piatt, Andrew W

    2014-04-15

    Online abstract groups, in which members aren't explicitly connected, can be automatically identified by computer-implemented methods. The methods involve harvesting records from social media and extracting content-based and structure-based features from each record. Each record includes a social-media posting and is associated with one or more entities. Each feature is stored on a data storage device and includes a computer-readable representation of an attribute of one or more records. The methods further involve grouping records into record groups according to the features of each record. Further still the methods involve calculating an n-dimensional surface representing each record group and defining an outlier as a record having feature-based distances measured from every n-dimensional surface that exceed a threshold value. Each of the n-dimensional surfaces is described by a footprint that characterizes the respective record group as an online abstract group.
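The grouping-and-outlier rule above can be sketched with the n-dimensional surface simplified to a centroid-plus-radius footprint per record group. This is a deliberate simplification of the patent's surfaces, and all names below are illustrative:

```python
import math

# Simplified sketch of the outlier rule: a record (as a feature vector)
# is an outlier when its distance beyond EVERY group's surface exceeds
# a threshold. Here each group's "n-dimensional surface" is reduced to
# a (centroid, radius) footprint.
def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_outlier(record, groups, threshold):
    # groups: list of (centroid, radius) footprints
    return all(distance(record, centroid) - radius > threshold
               for centroid, radius in groups)

groups = [((0.0, 0.0), 1.0), ((10.0, 10.0), 2.0)]
print(is_outlier((0.5, 0.5), groups, 0.5))   # False: inside the first group
print(is_outlier((5.0, 5.0), groups, 0.5))   # True: far from both surfaces
```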

  14. Automatic feed system for ultrasonic machining

    DOE Patents [OSTI]

    Calkins, Noel C.

    1994-01-01

    Method and apparatus for ultrasonic machining in which feeding of a tool assembly holding a machining tool toward a workpiece is accomplished automatically. In ultrasonic machining, a tool located just above a workpiece and vibrating in a vertical direction imparts vertical movement to particles of abrasive material which then remove material from the workpiece. The tool does not contact the workpiece. Apparatus for moving the tool assembly vertically is provided such that it operates with a relatively small amount of friction. Adjustable counterbalance means is provided which allows the tool to be immobilized in its vertical travel. A downward force, termed overbalance force, is applied to the tool assembly. The overbalance force causes the tool to move toward the workpiece as material is removed from the workpiece.

  15. Automatic image acquisition processor and method

    DOE Patents [OSTI]

    Stone, William J.

    1986-01-01

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.
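The quadrant-balancing loop described above can be sketched as follows. The ring radius, step size, and update rule are illustrative assumptions, not taken from the patent:

```python
import math

# Sketch of the patent's idea: scatter a ring of sample points about a
# trial center, count the points that land on the object in each
# quadrant, and derive error signals from the count imbalance to nudge
# the trial center toward the geometric center.
def find_center(inside, trial, radius=9.0, n_points=64, step=0.5, iters=200):
    cx, cy = trial
    for _ in range(iters):
        counts = [0, 0, 0, 0]                    # quadrants: +x+y, -x+y, -x-y, +x-y
        for k in range(n_points):
            a = math.pi * (2 * k + 1) / n_points  # half-offset angles avoid the axes
            dx, dy = radius * math.cos(a), radius * math.sin(a)
            if inside(cx + dx, cy + dy):
                q = (0 if dx > 0 else 1) if dy > 0 else (2 if dx < 0 else 3)
                counts[q] += 1
        # Error signals from the quadrant comparison, as described above.
        ex = (counts[0] + counts[3]) - (counts[1] + counts[2])
        ey = (counts[0] + counts[1]) - (counts[2] + counts[3])
        if ex == 0 and ey == 0:                  # balanced: center found within step size
            return cx, cy
        cx += step * (ex > 0) - step * (ex < 0)
        cy += step * (ey > 0) - step * (ey < 0)
    return cx, cy

# Object: a disk of radius 10 centered at (20, 30); trial center starts off-center.
disk = lambda x, y: (x - 20.0) ** 2 + (y - 30.0) ** 2 <= 100.0
print(find_center(disk, trial=(17.0, 33.0)))   # converges near (20, 30)
```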

  16. Automatic image acquisition processor and method

    DOE Patents [OSTI]

    Stone, W.J.

    1984-01-16

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.

  17. It's the little things that count | National Nuclear Security

    National Nuclear Security Administration (NNSA)

    Administration | (NNSA) It's the little things that count April 16, 2012 OAK RIDGE, Tenn. -- In just five months, the Jack Case Center at the National Nuclear Security Administration's Y-12 National Security Complex has not only achieved compliance with a national building standard for energy sustainability, but has also accomplished a 21.4 percent reduction in energy consumption.

  18. Modeling patterns in count data using loglinear and related models

    SciTech Connect (OSTI)

    Atwood, C.L.

    1995-12-01

    This report explains the use of loglinear and logit models, for analyzing Poisson and binomial counts in the presence of explanatory variables. The explanatory variables may be unordered categorical variables or numerical variables, or both. The report shows how to construct models to fit data, and how to test whether a model is too simple or too complex. The appropriateness of the methods with small data sets is discussed. Several example analyses, using the SAS computer package, illustrate the methods.
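For the simplest loglinear model, independence in a two-way table, the fitted counts and the likelihood-ratio statistic can be computed directly. This is a sketch in Python rather than the report's SAS, and the example table is invented:

```python
import math

# Independence loglinear model for a two-way table of Poisson counts:
# log(mu_ij) = u + a_i + b_j, so the fitted counts are
# m_ij = (row total)(column total)/N, and the deviance
# G^2 = 2 * sum n_ij * log(n_ij / m_ij) tests whether this simple
# model is adequate (compare to chi-square on (R-1)(C-1) df).
def independence_fit(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    fitted = [[ri * cj / n for cj in cols] for ri in rows]
    g2 = 2.0 * sum(obs * math.log(obs / fit)
                   for row_o, row_f in zip(table, fitted)
                   for obs, fit in zip(row_o, row_f) if obs > 0)
    return fitted, g2

counts = [[30, 10],
          [20, 40]]
fitted, g2 = independence_fit(counts)
print(fitted)   # expected counts under independence
print(g2)       # compare to chi-square with 1 degree of freedom
```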

  19. Automatic target recognition apparatus and method

    DOE Patents [OSTI]

    Baumgart, Chris W.; Ciarcia, Christopher A.

    2000-01-01

    An automatic target recognition apparatus (10) is provided, having a video camera/digitizer (12) for producing a digitized image signal (20) representing an image containing therein objects which objects are to be recognized if they meet predefined criteria. The digitized image signal (20) is processed within a video analysis subroutine (22) residing in a computer (14) in a plurality of parallel analysis chains such that the objects are presumed to be lighter in shading than the background in the image in three of the chains and further such that the objects are presumed to be darker than the background in the other three chains. In two of the chains the objects are defined by surface texture analysis using texture filter operations. In another two of the chains the objects are defined by background subtraction operations. In yet another two of the chains the objects are defined by edge enhancement processes. In each of the analysis chains a calculation operation independently determines an error factor relating to the probability that the objects are of the type which should be recognized, and a probability calculation operation combines the results of the analysis chains.

  20. Automatic recovery of missing amplitudes and phases in tilt-limited...

    Office of Scientific and Technical Information (OSTI)

    Automatic recovery of missing amplitudes and phases in tilt-limited electron crystallography of two-dimensional crystals Citation Details In-Document Search Title: Automatic...

  1. Fact #850: December 8, 2014 Automatic Transmissions have closed the Fuel Economy Gap with Manual Transmissions

    Broader source: Energy.gov [DOE]

    Historically, manual transmissions have delivered better fuel economy than automatic transmissions. However, improvements in the efficiency of automatic transmissions have closed that gap in recent...

  2. Development of Counted Single Donor Devices using in-situ Single...

    Office of Scientific and Technical Information (OSTI)

    Development of Counted Single Donor Devices using in-situ Single Ion Detectors on the SNL NanoImplanter. Citation Details In-Document Search Title: Development of Counted Single ...

  3. FTCP-08-001, Methodology for Counting TQP Personnel and Qualifications...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    1, Methodology for Counting TQP Personnel and Qualifications FTCP-08-001, Methodology for Counting TQP Personnel and Qualifications FTCP Issue Paper: FTCP-08-001 Approved by FTCP, ...

  4. Learning How to Count: A High Multiplicity Search for the LHC...

    Office of Scientific and Technical Information (OSTI)

    Learning How to Count: A High Multiplicity Search for the LHC Citation Details In-Document Search Title: Learning How to Count: A High Multiplicity Search for the LHC Authors:...

  5. Neutron counting and gamma spectroscopy with PVT detectors.

    SciTech Connect (OSTI)

    Mitchell, Dean James; Brusseau, Charles A.

    2011-06-01

    Radiation portals normally incorporate a dedicated neutron counter and a gamma-ray detector with at least some spectroscopic capability. This paper describes the design and presents characterization data for a detection system called PVT-NG, which uses large polyvinyl toluene (PVT) detectors to monitor both types of radiation. The detector material is surrounded by polyvinyl chloride (PVC), which emits high-energy gamma rays following neutron capture reactions. Assessments based on high-energy gamma rays are well suited for the detection of neutron sources, particularly in border security applications, because few isotopes in the normal stream of commerce have significant gamma-ray yields above 3 MeV. Therefore, an increased count rate for high-energy gamma rays is a strong indicator of the presence of a neutron source. The sensitivity of the PVT-NG sensor to bare {sup 252}Cf is 1.9 counts per second per nanogram (cps/ng) and the sensitivity for {sup 252}Cf surrounded by 2.5 cm of polyethylene is 2.3 cps/ng. The PVT-NG sensor is a proof-of-principle sensor that was not fully optimized. The neutron detector sensitivity could be improved, for instance, by using additional moderator. The PVT-NG detectors and associated electronics are designed to provide improved resolution, gain stability, and performance at high count rates relative to PVT detectors in typical radiation portals. As well as addressing the needs for neutron detection, these characteristics are desirable for analysis of the gamma-ray spectra. Accurate isotope identification results were obtained despite the common impression that the absence of photopeaks makes data collected by PVT detectors unsuitable for spectroscopic analysis. The PVT detectors in the PVT-NG unit are used for both gamma-ray and neutron detection, so the sensitive volume exceeds the volume of the detection elements in portals that use dedicated components to detect each type of radiation.

  6. Optimal gate-width setting for passive neutrons multiplicity counting

    SciTech Connect (OSTI)

    Croft, Stephen; Evans, Louise G; Schear, Melissa A

    2010-01-01

    When setting up a passive neutron coincidence counter it is natural to ask what coincidence gate settings should be used to optimize the counting precision. If the gate width is too short then signal is lost and the precision is compromised because in a given period only a few coincidence events will be observed. On the other hand if the gate is too large the signal will be maximized but it will also be compromised by the high level of random pile-up or Accidental coincidence events which must be subtracted. In the case of shift register electronics connected to an assay chamber with an exponential dieaway profile operating in the regime where the Accidentals rate dominates the Reals coincidence rate but where dead-time is not a concern, simple arguments allow one to show that the relative precision on the net Reals rate is minimized when the coincidence gate is set to about 1.2 times the 1/e dieaway time of the system. In this work we show that making the same assumptions it is easy to show that the relative precision on the Triples rates is also at a minimum when the relative precision of the Doubles (or Reals) is at a minimum. Although the analysis is straightforward, to our knowledge such a discussion has not been documented in the literature before. Actual measurement systems do not always behave in the ideal way we choose to model them. Fortunately, however, the variation in the relative precision as a function of gate width is rather flat for traditional safeguards counters, and so the performance is somewhat forgiving of the exact choice. The derivation further serves to delineate the important parameters which determine the relative counting precision of the Doubles and Triples rates under the regime considered. To illustrate the similarities and differences we consider the relative standard deviation that might be anticipated for a passive correlation count of an axial section of a spent nuclear fuel assembly under practically achievable conditions.
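The quoted optimum can be checked numerically. Assuming an ideal exponential dieaway and an Accidentals-dominated variance (the regime stated above), the Accidentals grow linearly with gate width G while the Reals signal grows as the gate fraction 1 - exp(-G/tau), so the relative precision scales as sqrt(G)/(1 - exp(-G/tau)):

```python
import math

# Numerical check of the gate-width rule: minimize the relative
# precision sqrt(G) / (1 - exp(-G/tau)) over gate widths G, with the
# dieaway time tau as the unit of time. The exact minimum solves
# exp(x) = 1 + 2x, i.e. x = G/tau ~ 1.26.
def relative_precision(g, tau=1.0):
    return math.sqrt(g) / (1.0 - math.exp(-g / tau))

# Scan gate widths from 0.01*tau to 5*tau in steps of 0.01*tau.
best = min((relative_precision(g / 100.0), g / 100.0) for g in range(1, 501))
print(best[1])   # optimal gate in units of tau -> 1.26
```

The curve is quite flat near the minimum, consistent with the abstract's remark that performance is forgiving of the exact gate choice.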

  7. Los Alamos Middle School team wins regional MathCounts competition

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Los Alamos Middle School team wins Regional MathCounts competition Community Connections: Your link to news and opportunities from Los Alamos National Laboratory March 1, 2013 Los Alamos Middle School won the regional MathCounts competition, competing against 60 other middle schools for the title.

  8. Automatic contact in DYNA3D for vehicle crashworthiness

    SciTech Connect (OSTI)

    Whirley, R.G.; Engelmann, B.E.

    1993-07-15

    This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit nonlinear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational costs. This paper discusses in detail a new four-step automatic contact algorithm. Key aspects of the proposed method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a smoothly varying surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad-hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.

  9. VEE-0043- In the Matter of Greenville Automatic Gas Company

    Broader source: Energy.gov [DOE]

    On March 11, 1997, Greenville Automatic Gas Company (Greenville) filed an Application for Exception with the Office of Hearings and Appeals (OHA) of the Department of Energy (DOE). In its...

  10. Security Requirements for Classified Automatic Data Processing Systems

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    1985-07-10

    To establish and describe the computer security program for classified automatic data processing (ADP) systems at the Department of Energy (DOE) Headquarters. This directive does not cancel another directive. Canceled by DOE N 251.9.

  11. Install an Automatic Blowdown Control System - Steam Tip Sheet #23

    SciTech Connect (OSTI)

    2012-01-01

    This revised AMO steam tip sheet on installing automatic blowdown controls provides how-to advice for improving industrial steam systems using low-cost, proven practices and technologies.

  12. Metals processing control by counting molten metal droplets

    DOE Patents [OSTI]

    Schlienger, Eric; Robertson, Joanna M.; Melgaard, David; Shelmidine, Gregory J.; Van Den Avyle, James A.

    2000-01-01

    Apparatus and method for controlling metals processing (e.g., ESR) by melting a metal ingot and counting molten metal droplets during melting. An approximate amount of metal in each droplet is determined, and a melt rate is computed therefrom. Impedance of the melting circuit is monitored, such as by calculating by root mean square a voltage and current of the circuit and dividing the calculated current into the calculated voltage. Analysis of the impedance signal is performed to look for a trace characteristic of formation of a molten metal droplet, such as by examining skew rate, curvature, or a higher moment.
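The impedance-monitoring step described above can be sketched as follows. The window length, sample values, and function names are illustrative assumptions, not taken from the patent:

```python
import math

# Compute RMS voltage and current over a sampling window and divide to
# obtain the melting-circuit impedance, the signal that is scanned for
# droplet-formation signatures (skew rate, curvature, higher moments).
def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def impedance(voltage_samples, current_samples):
    return rms(voltage_samples) / rms(current_samples)

# A droplet forming and detaching briefly perturbs this impedance trace;
# counting such events times the approximate mass per droplet gives a melt rate.
volts = [24.0, 23.8, 24.1, 23.9]
amps  = [4000.0, 3980.0, 4010.0, 3990.0]
print(impedance(volts, amps))   # ohms, roughly 24/4000 = 0.006
```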

  13. Intra-Hour Dispatch and Automatic Generator Control Demonstration with

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Solar Forecasting | Department of Energy The University of California at San Diego (UCSD) is leading a project that will reduce power system operation cost by providing a prediction of the generation fleet's behavior in real time for realistic photovoltaic penetration scenarios. APPROACH The primary

  14. Automatic Performance Collection (AutoPerf) | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Facility: documentation page for Automatic Performance Collection (AutoPerf), listed among ALCF performance tools such as TAU, HPCToolkit, mpiP, gprof, Darshan, PAPI, Openspeedshop, and Scalasca.

  15. Optical People Counting for Demand Controlled Ventilation: A Pilot Study of Counter Performance

    SciTech Connect (OSTI)

    Fisk, William J.; Sullivan, Douglas

    2009-12-26

    This pilot scale study evaluated the counting accuracy of two people counting systems that could be used in demand controlled ventilation systems to provide control signals for modulating outdoor air ventilation rates. The evaluations included controlled challenges of the people counting systems using pre-planned movements of occupants through doorways and evaluations of counting accuracies when naive occupants (i.e., occupants unaware of the counting systems) passed through the entrance doors of the building or room. The two people counting systems had high counting accuracies, with errors typically less than 10 percent, for typical non-demanding counting events. However, counting errors were high in some highly challenging situations, such as multiple people passing simultaneously through a door. Counting errors, for at least one system, can be very high if people stand in the field of view of the sensor. Both counting systems have limitations and would need to be used only at appropriate sites where the demanding situations that led to counting errors were rare.

  16. Evaluation of potential severe accidents during low power and shutdown operations at Grand Gulf, Unit 1: Analysis of core damage frequency from internal events for Plant Operational State 5 during a refueling outage. Volume 2, Part 3: Internal Events Appendices I and J

    SciTech Connect (OSTI)

    Yakle, J.; Darby, J.; Whitehead, D.; Staple, B.

    1994-06-01

    This report provides supporting documentation for various tasks associated with the performance of the probablistic risk assessment for Plant Operational State 5 during a refueling outage at Grand Gulf, Unit 1 as documented in Volume 2, Part 1 of NUREG/CR-6143.

  17. Full-scale demonstration of low-NO{sub x} cell{trademark} burner retrofit: Addendum to long-term testing report, September 1994 outage: Examination of corrosion test panel and UT survey in DP&L Unit {number_sign}4

    SciTech Connect (OSTI)

    Kung, S.C.; Kleisley, R.J.

    1995-06-01

    As part of this DOE demonstration program, a corrosion test panel was installed on the west sidewall of Dayton Power & Light Unit No. 4 at the J. M. Stuart Station (JMSS4) during the burner retrofit outage in November 1991. The test panel consisted of four sections of commercial coatings separated by bare SA213-T2 tubing. During the retrofit outage, a UT survey was performed to document the baseline wall thicknesses of the test panel, as well as several furnace wall areas outside the test panel. The purpose of the UT survey was to generate the baseline data so that the corrosion wastage associated with the operation of Low NO{sub x} Cell Burners (LNCB{trademark} burner) could be quantitatively determined. The corrosion test panel in JMSS4 was examined in April 1993 after the first 15-month operation of the LNCB{trademark} burners. Details of the corrosion analysis and UT data were documented in the Long-Term Testing Report. The second JMSS4 outage following the LNCB{trademark} burner retrofit took place in September 1994. Up to this point, the test panel in JMSS4 had been exposed to the corrosive combustion environment for approximately 31 months under normal boiler operation of JMSS4. This test period excluded the down time for the April 1993 outage. During the September 1994 outage, 70 tube samples of approximately one-foot length were cut from the bottom of the test panel. These samples were evaluated by the Alliance Research Center of B&W using the same metallurgical techniques as those employed for the previous outage. In addition, UT measurements were taken on the same locations of the lower furnace walls in JMSS4 as those during the prior outages. Results of the metallurgical analyses and UT surveys from different exposure times were compared, and the long-term performance of waterwall materials was analyzed.
The corrosion data obtained from the long-term field study at JMSS4 after 32 months of LNCB{trademark} burner operation are summarized in this report.

  18. Application of clearance automatic laser inspection system to clearance measurement of concrete waste

    SciTech Connect (OSTI)

    Sasaki, Michiya; Ogino, Haruyuki; Hattori, Takatoshi

    2007-07-01

    Recently, the Clearance Automatic Laser Inspection System (CLALIS) has been developed for the clearance measurement of metal scraps. It utilizes three-dimensional (3D) laser scanning, gamma-ray measurement and Monte Carlo calculation, and has outstanding detection ability. For the clearance measurement of concrete segments, the effect of background (BG) gamma rays from natural radionuclides in the measurement target, such as K-40 and the radioactive decay products of Th-232 and U-238, should be compensated for to ensure adequate waste management. Since NE102A plastic scintillation detectors are used for gamma-ray measurement in CLALIS, it is impossible to distinguish between count rates of natural radionuclides and contaminants on the basis of gamma-ray energy information. To apply CLALIS to the clearance measurement of concrete segments, the original activity evaluation method was improved by adding a new compensation procedure. In this procedure, the BG count rate due to natural radionuclides is estimated by a Monte Carlo calculation with pre-analyzed data of a representative sample of the measurement target. The activity concentration of natural radionuclides in concrete differs markedly depending on the production location of its components, such as cement and aggregates. In this study, using six mock concrete waste samples, which were composed of cement and fine aggregate from various production locations, the accuracy of BG compensation was experimentally estimated. In addition, the accuracy of calibration for concrete waste was also estimated using a number of mock concrete segments of small and large triangular prisms. By considering the uncertainties of BG compensation and calibration, the detection limit of CLALIS for concrete waste was estimated. As a result, it was revealed that CLALIS could be applied to the clearance measurement of concrete segments when the mass of the measurement target is greater than approximately 1.1 kg and the key radionuclide is Co-60.

  19. Active Well Counting Using New PSD Plastic Detectors

    SciTech Connect (OSTI)

    Hausladen, Paul; Newby, Jason; McElroy, Robert Dennis

    2015-11-01

    This report presents results and analysis from a series of proof-of-concept measurements to assess the suitability of segmented detectors constructed from Eljen EJ-299-34 PSD-plastic scintillator with pulse-shape discrimination capability for the purposes of quantifying uranium via active neutron coincidence counting. Present quantification of bulk uranium materials for international safeguards and domestic materials control and accounting relies on active neutron coincidence counting systems, such as the Active Well Coincidence Counter (AWCC) and the Uranium Neutron Coincidence Collar (UNCL), that use moderated He-3 proportional counters along with necessarily low-intensity 241Am(Li) neutron sources. Scintillation-based fast-neutron detectors are a potentially superior technology to the existing AWCC and UNCL designs due to their spectroscopic capability and their inherently short neutron coincidence times that largely eliminate random coincidences and enable interrogation by stronger sources. One of the past impediments to the investigation and adoption of scintillation counters for the purpose of quantifying bulk uranium was the commercial availability of scintillators having the necessary neutron-gamma pulse-shape discrimination properties only as flammable liquids. Recently, Eljen EJ-299-34 PSD-plastic scintillator became commercially available. The present work is the first assessment of an array of PSD-plastic detectors for the purposes of quantifying bulk uranium. The detector panel used in the present work was originally built as the focal plane for a fast-neutron imager, but it was repurposed for the present investigation by construction of a stand to support the inner well of an AWCC immediately in front of the detector panel. The detector panel and data acquisition of this system are particularly well suited for performing active-well fast-neutron counting of LEU and HEU samples because the active detector volume is solid, the 241Am(Li) interrogating

  20. Doubles counting of highly multiplying items in reflective surroundings

    SciTech Connect (OSTI)

    Croft, Stephen; Evans, Louise G; Schear, Melissa A; Tobin, Stephen J

    2010-11-18

    When neutrons are counted from a spontaneously fissile multiplying item in a reflecting environment, the temporal behavior of the correlated signal following neutron birth is complex. At early times the signal is dominated by prompt fission events coming from spontaneous fission bursts and also from prompt fast-neutron induced fission events. At later times neutrons 'returning' from the surroundings induce fission and give rise to an additional chain of correlated events. The prompt and returning components probe the fissile and fertile constituents of the item in different ways and it is potentially beneficial to exploit this fact. In this work we look at how the two components can be represented using a linear combination of two simple functions. Fitting of the composite function to the capture time distribution represents one way of quantifying the proportion of each contribution. Another approach, however, is to use a dual shift register analysis where after each triggering event two coincidence gates are opened, one close to the trigger that responds preferentially to the prompt dynamics and one later in time which is more sensitive to the returning neutron induced events. To decide on the best gate positions and gate widths, and also to estimate the counting precision, we can use the analytical fit to work out the necessary gate utilization factors which are required in both these calculations. In this work, we develop the approach. Illustrative examples are given using spent Low Enriched Uranium (LEU) Pressurized light Water Reactor (LWR) fuel assemblies submersed in borated water and counted in a ring of {sup 3}He gas-filled proportional counters. In this case the prompt component is dominated by {sup 244}Cm spontaneous fission and induced fast-neutron fission in, for example, {sup 238}U, while the returning low-energy neutrons induce fission mainly in the fissile nuclides such as {sup 239}Pu, {sup 241}Pu and {sup 235}U.
One requirement is to calculate the Random
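The early-gate/late-gate intuition above can be made concrete with the gate utilization factor of each exponential component. The dieaway times and gate settings below are illustrative assumptions, not values from the paper:

```python
import math

# For a capture-time component with dieaway time tau, the fraction of
# its correlated counts falling in a gate opened `predelay` after the
# trigger with width `gate` is the gate utilization factor:
# f = exp(-predelay/tau) * (1 - exp(-gate/tau)).
def gate_fraction(predelay, gate, tau):
    return math.exp(-predelay / tau) * (1.0 - math.exp(-gate / tau))

tau_prompt, tau_return = 8.0, 100.0    # illustrative dieaway times (arbitrary units)

# An early gate close to the trigger responds preferentially to the prompt
# component, while a late gate is more sensitive to the returning component.
early_prompt = gate_fraction(0.5, 24.0, tau_prompt)
early_return = gate_fraction(0.5, 24.0, tau_return)
late_prompt = gate_fraction(50.0, 100.0, tau_prompt)
late_return = gate_fraction(50.0, 100.0, tau_return)
print(early_prompt, early_return)   # prompt dominates the early gate
print(late_prompt, late_return)     # returning component dominates the late gate
```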

  1. Neutron multiplicity counting: Confidence intervals for reconstruction parameters

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Verbeke, Jerome M.

    2016-03-09

    From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile material mass and multiplication space. These uncertainties will enable us to evaluate quantitatively proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.

  2. Status of in-plant neutron coincidence counting

    SciTech Connect (OSTI)

    Ensslin, N.; Krick, M.; Menlove, H.; Stewart, J.

    1986-01-01

    Neutron coincidence counters are used in nuclear material processing plants to assay bulk quantities of plutonium or uranium. Passive assays of plutonium are often made with the High-Level Neutron Counter (HLNC or HLNC-II), the Dual-Range Coincidence Counter, or customized detector geometries. Active assays of uranium are often made with the Active Well Coincidence Counter or the Uranium Neutron Coincidence Collar. Modern counters may have flattened efficiency profiles, fast AMPTEK amplifier/discriminators mounted directly next to the /sup 3/He detection tubes, external background shields, or special sample-loading mechanisms. Typical counting times and accuracies that can be obtained for plutonium are summarized. If isotopic composition is known, large plutonium samples can be assayed in 100 to 200 s - comparable to the time required to input sample data into the counter's calculator or computer.

  3. The power grid of the future is a platform that

    Energy Savers [EERE]

    automatically report outages, smart relays that sense and recover from faults in the substation automatically, automated feeder switches that re-route power around problems, and ...

  4. A new automatic contact formulation in DYNA3D

    SciTech Connect (OSTI)

    Whirley, R.G.; Engelmann, B.E.

    1993-08-01

    This paper presents a new approach for the automatic definition and treatment of mechanical contact in DYNA3D. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but must maintain high reliability and acceptable computational costs. The major features of the proposed new method include automatic identification of potentially contacting surfaces during the initialization phase, a new high-performance contact search procedure, and the use of a well-defined surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad-hoc rules. Three examples are presented which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.

  5. Program Your Thermostat for Automatic Savings | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Automatic Savings Program Your Thermostat for Automatic Savings December 23, 2008 - 4:00am Addthis Allison Casey Senior Communicator, NREL Did you know that you can save around 10% a year on your heating and cooling bills by simply turning your thermostat back 10°-15°F for eight hours? Sounds great, but I know when I'm rushing out the door, the last thing on my mind is turning down the thermostat. Luckily, programmable thermostats make it easy to save without much effort. In fact, once you get

  6. A new approach to automatic radiation spectrum analysis

    SciTech Connect (OSTI)

    Olmos, P.; Diaz, J.C.; Perez, J.M.; Aguayo, P.; Bru, A.; Garcia-Belmonte, G.; de Pablos, J.L. ); Gomez, P.; Rodellar, V. )

    1991-08-01

    In this paper the application of adaptive methods to the solution of the automatic radioisotope identification problem using the energy spectrum is described. The identification is carried out by means of neural networks, which allow the use of relatively reduced computational structures, while keeping high pattern recognition capability. In this context, it has been found that one of these simple structures, once adequately trained, is quite suitable to identify a given isotope present in a mixture of elements as well as the relative proportions of each identified substance. Preliminary results are good enough to consider these adaptive structures as powerful and simple tools in the automatic spectrum analysis.

  7. Working with SRNL - Our Facilities- Ultra Low-Level Underground Counting

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Ultra Low-Level Underground Counting Facility is the only facility of its kind in the country. This facility is located 50 feet below ground level and has four-inch-thick walls of pre-nuclear-weapons-era steel. This allows highly sensitive measurements of ultra-low amounts of environmental radioactivity, free from interference by background radiation.

  8. Cryogenic, high-resolution x-ray detector with high count rate capability

    DOE Patents [OSTI]

    Frank, Matthias; Mears, Carl A.; Labov, Simon E.; Hiller, Larry J.; Barfknecht, Andrew T.

    2003-03-04

    A cryogenic, high-resolution X-ray detector with high count rate capability has been invented. The new X-ray detector is based on superconducting tunnel junctions (STJs), and operates without thermal stabilization at or below 500 mK. The X-ray detector exhibits good resolution (~5-20 eV FWHM) for soft X-rays in the keV region, and is capable of counting at count rates of more than 20,000 counts per second (cps). Simple, FET-based charge amplifiers, current amplifiers, or conventional spectroscopy shaping amplifiers can provide the electronic readout of this X-ray detector.

  9. Modeling the Number of Ignitions Following an Earthquake: Developing Prediction Limits for Overdispersed Count Data

    Broader source: Energy.gov [DOE]

    Modeling the Number of Ignitions Following an Earthquake: Developing Prediction Limits for Overdispersed Count Data Elizabeth J. Kelly and Raymond N. Tell

  10. An Automatic Doppler Spectrum Classifier for the MMCRs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An Automatic Doppler Spectrum Classifier for the MMCRs. Authors: Luke, Edward (Brookhaven National Laboratory); Kollias, Pavlos (Brookhaven National Laboratory); Clothiaux, Eugene (The Pennsylvania State University); Johnson, Karen (Brookhaven National Laboratory); Miller, Mark (Brookhaven National Laboratory); Widener, Kevin (Pacific Northwest National Laboratory); Jensen, Michael (Brookhaven National Laboratory); Vogelmann, Andrew (Brookhaven National Laboratory). Category: Instruments. The ARM MMCR receivers at the SGP, NSA

  11. Choosing Actuators for Automatic Control Systems of Thermal Power Plants

    SciTech Connect (OSTI)

    Gorbunov, A. I.; Serdyukov, O. V.

    2015-03-15

    Two types of actuators for automatic control systems of thermal power plants are analyzed: (i) pulse-controlled actuator and (ii) analog-controlled actuator with positioning function. The actuators are compared in terms of control circuit, control accuracy, reliability, and cost.

  12. Evaluation of potential severe accidents during low power and shutdown operations at Grand Gulf, Unit 1. Volume 2, Part 1C: Analysis of core damage frequency from internal events for plant operational State 5 during a refueling outage, Main report (Sections 11--14)

    SciTech Connect (OSTI)

    Whitehead, D.; Darby, J.; Yakle, J.

    1994-06-01

    This document contains the accident sequence analysis of internally initiated events for Grand Gulf, Unit 1 as it operates in the Low Power and Shutdown Plant Operational State 5 during a refueling outage. The report documents the methodology used during the analysis, describes the results from the application of the methodology, and compares the results with the results from two full power analyses performed on Grand Gulf.

  13. Evaluation of potential severe accidents during low power and shutdown operations at Grand Gulf, Unit 1: Evaluation of severe accident risks for plant operational state 5 during a refueling outage. Main report and appendices, Volume 6, Part 1

    SciTech Connect (OSTI)

    Brown, T.D.; Kmetyk, L.N.; Whitehead, D.; Miller, L.; Forester, J.; Johnson, J.

    1995-03-01

    Traditionally, probabilistic risk assessments (PRAs) of severe accidents in nuclear power plants have considered initiating events potentially occurring only during full power operation. Recent studies and operational experience have, however, implied that accidents during low power and shutdown could be significant contributors to risk. In response to this concern, in 1989 the Nuclear Regulatory Commission (NRC) initiated an extensive program to carefully examine the potential risks during low power and shutdown operations. Two plants, Surry (pressurized water reactor) and Grand Gulf (boiling water reactor), were selected as the plants to be studied. The program consists of two parallel projects being performed by Brookhaven National Laboratory (Surry) and Sandia National Laboratories (Grand Gulf). The program objectives include assessing the risks of severe accidents initiated during plant operational states other than full power operation and comparing the estimated risks with the risk associated with accidents initiated during full power operation as assessed in NUREG-1150. The scope of the program is that of a Level-3 PRA. The subject of this report is the PRA of the Grand Gulf Nuclear Station, Unit 1. The Grand Gulf plant utilizes a 3833 MWt BWR-6 boiling water reactor housed in a Mark III containment. The Grand Gulf plant is located near Port Gibson, Mississippi. The regime of shutdown analyzed in this study was plant operational state (POS) 5 during a refueling outage, which is approximately Cold Shutdown as defined by Grand Gulf Technical Specifications. The entire PRA of POS 5 is documented in a multi-volume NUREG report (NUREG/CR-6143). The internal events accident sequence analysis (Level 1) is documented in Volume 2. The Level 1 internal fire and internal flood analyses are documented in Volumes 3 and 4, respectively.

  14. Automatic anatomically selective image enhancement in digital chest radiography

    SciTech Connect (OSTI)

    Sezan, M.I. ); Minerbo, G.N. ); Schaetzing, R. )

    1989-06-01

    The authors develop a technique for automatic anatomically selective enhancement of digital chest radiographs. Anatomically selective enhancement is motivated by the desire to simultaneously meet the different enhancement requirements of the lung field and the mediastinum. A recent peak detection algorithm and a set of rules are applied to the image histogram to determine automatically a gray-level threshold between the lung field and mediastinum. The gray-level threshold facilitates anatomically selective gray-scale modification and/or unsharp masking. Further, in an attempt to suppress possible white-band or black-band artifacts due to unsharp masking at sharp edges, local-contrast adaptivity is incorporated into anatomically selective unsharp masking by designing an anatomy-sensitive emphasis parameter which varies asymmetrically with positive and negative values of the local image contrast.
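The core operation, gray-level-selective unsharp masking, can be sketched as follows. This is a simplified illustration, not the authors' implementation: the histogram-based threshold detection is assumed to have been done elsewhere, a plain 3x3 box blur stands in for whatever smoothing kernel the paper used, and the emphasis values are arbitrary:

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter with edge padding (stand-in smoothing kernel)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def selective_unsharp(img, threshold, k_high=1.5, k_low=0.5):
    """Unsharp masking whose emphasis parameter depends on which side of
    the anatomical gray-level threshold each pixel falls on."""
    detail = img - box_blur(img)              # high-frequency residue
    k = np.where(img > threshold, k_high, k_low)
    return img + k * detail
```

A per-pixel emphasis map like `k` is also where the paper's local-contrast adaptivity would plug in, by additionally varying `k` with the sign and magnitude of `detail`.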

  15. Mining Software Usage with the Automatic Library Tracking Database (ALTD)

    SciTech Connect (OSTI)

    Hadri, Bilel; Fahey, Mark R

    2013-01-01

    Tracking software usage is important for HPC centers, computer vendors, code developers and funding agencies to provide more efficient and targeted software support, and to forecast needs and guide HPC software effort towards the Exascale era. However, accurately tracking software usage on HPC systems has been a challenging task. In this paper, we present a tool called Automatic Library Tracking Database (ALTD) that has been developed and put in production on several Cray systems. The ALTD infrastructure prototype automatically and transparently stores information about libraries linked into an application at compilation time and also the executables launched in a batch job. We will illustrate the usage of libraries, compilers and third party software applications on a system managed by the National Institute for Computational Sciences.

  16. Fault Detection Tool Project: Automatic Discovery of Faults using Machine

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Learning

  17. Fact #850: December 8, 2014 Automatic Transmissions have closed...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Year  Automatic  Manual
    1980  17.1       18.2
    1981  18.4       20.1
    1982  20.2       22.2
    1983  21.0       23.0
    1984  20.8       22.5
    1985  21.3       22.9
    1986  21.3       22.8
    1987  20.9       22.2
    1988  20.9       22.0
    1989  20.7       22.0
    ...

  18. The IAEA neutron coincidence counting (INCC) and the DEMING least-squares fitting programs

    SciTech Connect (OSTI)

    Krick, M.S.; Harker, W.C.; Rinard, P.M.; Wenz, T.R.; Lewis, W.; Pham, P.; Ridder, P. de

    1998-12-01

    Two computer programs are described: (1) the INCC (IAEA or International Neutron Coincidence Counting) program and (2) the DEMING curve-fitting program. The INCC program is an IAEA version of the Los Alamos NCC (Neutron Coincidence Counting) code. The DEMING program is an upgrade of earlier Windows{reg_sign} and DOS codes with the same name. The versions described are INCC 3.00 and DEMING 1.11. The INCC and DEMING codes provide inspectors with the software support needed to perform calibration and verification measurements with all of the neutron coincidence counting systems used in IAEA inspections for the nondestructive assay of plutonium and uranium.

  19. Refinery Outages: First-Half 2016

    U.S. Energy Information Administration (EIA) Indexed Site


  20. The LANL C-NR counting room and fission product yields

    SciTech Connect (OSTI)

    Jackman, Kevin Richard

    2015-09-21

    This PowerPoint presentation focused on the following areas: the LANL C-NR counting room; fission product yields; Los Alamos neutron wheel experiments; recent experiments at NCERC; and post-detonation nuclear forensics.

  1. Self-Calibrated Cluster Counts as a Probe of Primordial Non-Gaussianity

    SciTech Connect (OSTI)

    Oguri, Masamune; /KIPAC, Menlo Park

    2009-05-07

    We show that the ability to probe primordial non-Gaussianity with cluster counts is drastically improved by adding the excess variance of counts, which contains information on the clustering. The conflicting dependences of changing the mass threshold and of including primordial non-Gaussianity on the mass function and biasing indicate that self-calibrated cluster counts can effectively break the degeneracy between primordial non-Gaussianity and the observable-mass relation. Based on the Fisher matrix analysis, we show that the count variance improves constraints on f{sub NL} by more than an order of magnitude. It exhibits little degeneracy with the dark energy equation of state. We forecast that upcoming Hyper Suprime-Cam cluster surveys and the Dark Energy Survey will constrain primordial non-Gaussianity at the level {sigma}(f{sub NL}) {approx} 8, which is competitive with forecasted constraints from next-generation cosmic microwave background experiments.
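The Fisher-matrix step behind forecasts like this is mechanically simple: marginalized parameter errors are square roots of diagonal entries of the inverse Fisher matrix, and independent observables contribute additively. A toy sketch with made-up numbers (the matrices below are illustrative, not values from the paper), only to show how adding a second observable such as the count variance tightens the constraint on the first parameter:

```python
import numpy as np

def marginalized_sigma(fisher, i):
    """Marginalized 1-sigma error on parameter i: sqrt((F^-1)_ii)."""
    return float(np.sqrt(np.linalg.inv(fisher)[i, i]))

# Toy 2-parameter Fisher matrices: (f_NL, observable-mass parameter).
F_counts = np.array([[2.0, 1.8],
                     [1.8, 2.0]])      # counts alone: strongly degenerate
F_variance = np.array([[1.0, 0.1],
                       [0.1, 1.5]])    # excess variance of counts

sigma_counts = marginalized_sigma(F_counts, 0)
sigma_joint = marginalized_sigma(F_counts + F_variance, 0)  # Fishers add
```

Because the two toy matrices have different degeneracy directions, the joint constraint is much tighter than either alone, which is the mechanism the abstract describes.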

  2. U.S. oil production forecast update reflects lower rig count

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    U.S. oil production forecast update reflects lower rig count Lower oil prices and fewer rigs drilling for crude oil are expected to slow U.S. oil production growth this year and in ...

  3. Don't Count Your Ions Before They Dissociate | U.S. DOE Office...

    Office of Science (SC) Website

    Don't Count Your Ions Before They Dissociate Basic Energy Sciences (BES) BES Home About ... The green shading represents the 99.98% of the molecules that exist in a neutral, or ...

  4. Automatic generation of warehouse mediators using an ontology engine

    SciTech Connect (OSTI)

    Critchlow, T., LLNL

    1998-03-04

    The Data Foundry research project at LLNL is investigating data warehousing in highly dynamic scientific environments. Specifically, we are developing a data warehouse to aid structural biologists in genetics research. Upon completion, this warehouse will present a uniform view of data obtained from several heterogeneous data sources containing distinct but related data from various genetics domains. Our warehouse uses a mediated data warehouse architecture in which only some data is represented explicitly in the warehouse; remote access is required to obtain the non-materialized data. Mediators are used to convert data from the data source representation to the warehouse representation and make it available to the warehouse. The major challenge we face is reducing the impact of source schema changes on warehouse availability and reliability: based upon previous efforts, we anticipate one source schema modification every 2-4 weeks once all of the desired sources have been integrated. Incorporating these modifications into the mediators using brute force results in an unacceptable amount of warehouse down-time. We believe that extensive use of a carefully designed ontology will allow us to overcome this problem, while providing a useful knowledge base for other applications. In addition to automatically generating the transformation between the data sources and the warehouse, the ontology will be used to guide automatic schema evolution, and provide a high level interface to the warehouse. This paper focuses on the use of the ontology to automatically generate mediators, because reducing the effect of source changes is a critical step in providing reliable access to heterogeneous data sources.

  5. Automatic inspection system for nuclear fuel pellets or rods

    DOE Patents [OSTI]

    Miller, Jr., William H.; Sease, John D.; Hamel, William R.; Bradley, Ronnie A.

    1978-01-01

    An automatic inspection system is provided for determining surface defects on cylindrical objects such as nuclear fuel pellets or rods. The active element of the system is a compound ring having a plurality of pneumatic jet units directed into a central bore. These jet units are connected to provide multiple circuits, each circuit being provided with a pressure sensor. The outputs of the sensors are fed to a comparator circuit whereby a signal is generated when the difference of pressure between pneumatic circuits, caused by a defect, exceeds a pre-set amount. This signal may be used to divert the piece being inspected into a "reject" storage bin or the like.

  6. Control System Design for Automatic Cavity Tuning Machines

    SciTech Connect (OSTI)

    Carcagno, R.; Khabiboulline, T.; Kotelnikov, S.; Makulski, A.; Nehring, R.; Nogiec, J.; Ross, M.; Schappert, W.; /Fermilab; Goessel, A.; Iversen, J.; Klinke, D.; /DESY

    2009-05-01

    A series of four automatic tuning machines for 9-cell TESLA-type cavities are being developed and fabricated in a collaborative effort among DESY, FNAL, and KEK. These machines are intended to support high-throughput cavity fabrication for construction of large SRF-based accelerator projects. Two of these machines will be delivered to cavity vendors for the tuning of XFEL cavities. The control system for these machines must support a high level of automation adequate for industrial use by non-expert operators. This paper describes the control system hardware and software design for these machines.

  7. Rapid automatic keyword extraction for information retrieval and analysis

    DOE Patents [OSTI]

    Rose, Stuart J.; Cowley, Wendy E.; Crow, Vernon L.; Cramer, Nicholas O.

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
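The scoring pipeline the abstract describes (candidate phrases delimited by stop words, word scores from co-occurrence degree and frequency, phrase scores as sums of word scores) can be sketched in a few lines. This follows the published RAKE recipe in spirit; the tiny stop-word list and the degree/frequency ratio are one common configuration, not necessarily the patent's exact choices:

```python
import re
from collections import defaultdict

# Deliberately tiny illustrative stop-word list; real use needs a full one.
STOP_WORDS = {"a", "an", "and", "the", "of", "for", "in", "on", "is", "are",
              "to", "by", "with", "or", "both", "can", "be", "then", "each"}

def candidate_phrases(text):
    """Split on sentence delimiters, then break runs of words at stop words."""
    phrases = []
    for sentence in re.split(r"[.!?,;:]", text.lower()):
        current = []
        for w in sentence.split():
            if w in STOP_WORDS:
                if current:
                    phrases.append(tuple(current))
                current = []
            else:
                current.append(w)
        if current:
            phrases.append(tuple(current))
    return phrases

def rake(text):
    """Return (phrase, score) pairs, highest-scoring keywords first."""
    phrases = candidate_phrases(text)
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)  # co-occurrence degree within the phrase
    word_score = {w: degree[w] / freq[w] for w in freq}
    scored = ((" ".join(p), sum(word_score[w] for w in p)) for p in set(phrases))
    return sorted(scored, key=lambda kv: -kv[1])
```

Multi-word phrases win under this scoring because each member word picks up degree from its neighbors, which matches the extraction behavior the abstract describes.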

  8. Reactor protection system with automatic self-testing and diagnostic

    DOE Patents [OSTI]

    Gaubatz, Donald C.

    1996-01-01

    A reactor protection system having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross communication of sensor readings allows comparison of four theoretically "identical" values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self test and diagnostic monitoring, sensor input through output relay logic, virtually eliminate the need for manual surveillance testing. This provides an ability for each division to cross-check all divisions and to sense failures of the hardware logic.
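The two-level voting scheme reduces to a pair of threshold counts. A minimal sketch of the voting arithmetic only (each sensor comparison to its scram setpoint is simplified to a boolean, and the "2/3 with spare" divisional criterion is illustrated as a plain 2-of-4; this is not the qualified plant logic):

```python
def divisional_vote(sensor_trips):
    """Software vote in one electronic chassis: each division sees all
    four sensor readings (its own plus three received over fiber) and
    declares a divisional scram when at least 2 of 4 agree."""
    return sum(sensor_trips) >= 2

def hardware_vote(division_scrams):
    """Hardware logic panel: initiate a reactor scram on a 2-out-of-4
    vote across the four divisional scram signals."""
    return sum(division_scrams) >= 2

def plant_scram(sensor_trips):
    """Both levels together: every division votes on the same shared
    sensor data, then the hardware panel votes on the divisions."""
    divisions = [divisional_vote(sensor_trips) for _ in range(4)]
    return hardware_vote(divisions)
```

The availability argument in the abstract falls out directly: one failed-high sensor can never trip a division, so it can never scram the plant, while any two genuine sensor trips propagate through both voting levels.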

  9. Automatic Blocking Of QR and LU Factorizations for Locality

    SciTech Connect (OSTI)

    Yi, Q; Kennedy, K; You, H; Seymour, K; Dongarra, J

    2004-03-26

    QR and LU factorizations for dense matrices are important linear algebra computations that are widely used in scientific applications. To efficiently perform these computations on modern computers, the factorization algorithms need to be blocked when operating on large matrices to effectively exploit the deep cache hierarchy prevalent in today's computer memory systems. Because both QR (based on Householder transformations) and LU factorization algorithms contain complex loop structures, few compilers can fully automate the blocking of these algorithms. Though linear algebra libraries such as LAPACK provide manually blocked implementations of these algorithms, automatically generating blocked versions of the computations offers additional benefits, such as automatic adaptation of different blocking strategies. This paper demonstrates how to apply an aggressive loop transformation technique, dependence hoisting, to produce efficient blockings for both QR and LU with partial pivoting. We present different blocking strategies that can be generated by our optimizer and compare the performance of auto-blocked versions with manually tuned versions in LAPACK, using reference BLAS, ATLAS BLAS, and native BLAS specially tuned for the underlying machine architectures.
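The blocking idea can be illustrated with a right-looking blocked LU factorization: an unblocked factorization of a narrow panel, a triangular solve for a block row of U, then a large matrix-matrix trailing update that is where the cache reuse comes from. A simplified numpy sketch; partial pivoting is omitted for brevity (so the input is assumed diagonally dominant), whereas the paper's optimizer targets the pivoted algorithm:

```python
import numpy as np

def unit_lower_solve(L, B):
    """Solve L @ X = B with L unit lower triangular (diagonal taken as 1)."""
    X = B.astype(float).copy()
    for i in range(1, L.shape[0]):
        X[i] -= L[i, :i] @ X[:i]
    return X

def lu_blocked(A, nb=64):
    """Right-looking blocked LU without pivoting. Returns a copy of A
    overwritten with L (strictly lower part, unit diagonal) and U (upper)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # 1. Unblocked LU on the tall panel A[k:n, k:e]
        for j in range(k, e):
            A[j + 1:, j] /= A[j, j]
            A[j + 1:, j + 1:e] -= np.outer(A[j + 1:, j], A[j, j + 1:e])
        # 2. Block row of U: solve L11 @ U12 = A12
        A[k:e, e:] = unit_lower_solve(A[k:e, k:e], A[k:e, e:])
        # 3. Trailing update: the cache-friendly matrix multiply
        A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    return A
```

For large matrices, step 3 dominates the flop count, so most of the work runs as a level-3 BLAS-style operation, which is exactly what blocking buys.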

  10. Reactor protection system with automatic self-testing and diagnostic

    DOE Patents [OSTI]

    Gaubatz, D.C.

    1996-12-17

    A reactor protection system is disclosed having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross communication of sensor readings allows comparison of four theoretically "identical" values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self test and diagnostic monitoring, sensor input through output relay logic, virtually eliminate the need for manual surveillance testing. This provides an ability for each division to cross-check all divisions and to sense failures of the hardware logic. 16 figs.

  11. Automatic classification of time-variable X-ray sources

    SciTech Connect (OSTI)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10 fold cross validation accuracy of the training data is ∼97% on a 7 class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.

  12. Automatic Thread-Level Parallelization in the Chombo AMR Library

    SciTech Connect (OSTI)

    Christen, Matthias; Keen, Noel; Ligocki, Terry; Oliker, Leonid; Shalf, John; Van Straalen, Brian; Williams, Samuel

    2011-05-26

    The increasing on-chip parallelism has some substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already-used target language for automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  13. Cosmic ray neutron background reduction using localized coincidence veto neutron counting

    DOE Patents [OSTI]

    Menlove, Howard O.; Bourret, Steven C.; Krick, Merlyn S.

    2002-01-01

    This invention relates to both the apparatus and method for increasing the sensitivity of measuring the amount of radioactive material in waste by reducing the interference caused by cosmic-ray generated neutrons. The apparatus includes: (a) a plurality of neutron detectors, each of the detectors including means for generating a pulse in response to the detection of a neutron; and (b) means, coupled to each of the neutron detectors, for counting only some of the pulses from each of the detectors, whether cosmic-ray or fission generated. The means for counting includes a means that, after counting one of the pulses, vetoes the counting of additional pulses for a prescribed period of time. The prescribed period of time is between 50 and 200 µs; in the preferred embodiment it is 128 µs. The veto means can be an electronic circuit which includes a leading edge pulse generator which passes a pulse but blocks any subsequent pulse for a period of between 50 and 200 µs. Alternately, the veto means is a software program which includes means for tagging each of the pulses from each of the detectors for both time and position, means for counting one of the pulses from a particular position, and means for rejecting those of the pulses which originate from the particular position in a time interval on the order of the neutron die-away time in polyethylene or other shield material. The neutron detectors are grouped in pods, preferably at least 10. The apparatus also includes means for vetoing the counting of coincidence pulses from all of the detectors included in each of the pods which are adjacent to the pod which includes the detector which produced the pulse which was counted.
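The leading-edge veto described above is a non-paralyzable hold-off: once a pulse is counted, subsequent pulses from the same position are blocked for the prescribed window. A minimal sketch with the preferred 128 µs window (the function name and list-based input are illustrative, not from the patent):

```python
def veto_counted_pulses(pulse_times_us, veto_window_us=128.0):
    """Return the pulse times that survive a leading-edge veto: after a
    pulse is counted, every pulse arriving within `veto_window_us`
    microseconds of it is rejected. Because a cosmic-ray spallation
    burst produces many neutrons within roughly one die-away time, the
    whole burst collapses to a single count."""
    counted = []
    last_counted = None
    for t in sorted(pulse_times_us):
        if last_counted is None or t - last_counted >= veto_window_us:
            counted.append(t)
            last_counted = t
    return counted
```

In the patent's software variant the same logic would run per detector position, so that genuinely independent fission events in different pods are not vetoed against each other.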

  14. 2014-03-07 Issuance: Energy Conservation Standards for Automatic Commercial

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    2014-03-07 Issuance: Energy Conservation Standards for Automatic Commercial Ice Makers; Notice of Proposed Rulemaking. This document is a pre-publication Federal Register notice of proposed rulemaking regarding energy conservation standards for automatic commercial ice makers, as issued by the Assistant Secretary on

  15. Semi-automatic delineation using weighted CT-MRI registered images...

    Office of Scientific and Technical Information (OSTI)

    Title: Semi-automatic delineation using weighted CT-MRI registered images for radiotherapy of nasopharyngeal cancer. Purpose: ...

  16. Vehicle Technologies Office Merit Review 2014: System for Automatically Maintaining Pressure in a Commercial Truck Tire

    Office of Energy Efficiency and Renewable Energy (EERE)

    Presentation given by Goodyear at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about a system for automatically...

  17. FLOP Counts for "Small" Single-Node Miniapplication Tests

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    FLOP Counts for "Small" Single-Node Miniapplication Tests. These data, obtained using the NERSC Hopper system, are provided for reference.

    Code       MPI Tasks  Threads  Reference TFLOP Count  Benchmark Time (seconds)    # of iterations
    miniFE     144        1        5.05435E+12            130.2 (total program time)
    miniGhost  96         1        6.55500E+12            76.5
    AMG        96         1        1.30418E+12            66.95                       18
    UMT        96         1        1.30211E+13            416.99                      49
    SNAP       96         1        5.84246E+11            15.37                       3059
    miniDFT    40         1
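Given a reference FLOP count and a benchmark wall time like those in the table above, the achieved rate is simply count divided by time. A minimal sketch (the function name is illustrative; the AMG numbers are taken from the table):

```python
def tflops_rate(flop_count, seconds):
    """Achieved rate in TFLOP/s from a total FLOP count and wall time."""
    return flop_count / seconds / 1e12

# AMG row from the table: 1.30418E+12 FLOPs in 66.95 s -> ~0.0195 TFLOP/s
amg_rate = tflops_rate(1.30418e12, 66.95)
```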

  18. Automatic generation of warehouse mediators using an ontology engine

    SciTech Connect (OSTI)

    Critchlow, T., LLNL

    1998-04-01

    Data warehouses created for dynamic scientific environments, such as genetics, face significant challenges to their long-term feasibility. One of the most significant of these is the high frequency of schema evolution resulting from both technological advances and scientific insight. Failure to quickly incorporate these modifications will quickly render the warehouse obsolete, yet each evolution requires significant effort to ensure the changes are correctly propagated. DataFoundry utilizes a mediated warehouse architecture with an ontology infrastructure to reduce the maintenance requirements of a warehouse. Among other things, the ontology is used as an information source for automatically generating mediators, the methods that transfer data between the data sources and the warehouse. The identification, definition and representation of the metadata required to perform this task is a primary contribution of this work.

  19. Rotor assembly and method for automatically processing liquids

    DOE Patents [OSTI]

    Burtis, C.A.; Johnson, W.F.; Walker, W.A.

    1992-12-22

    A rotor assembly is described for performing a relatively large number of processing steps upon a sample, such as a whole blood sample, and a diluent, such as water. It includes a rotor body for rotation about an axis and includes a network of chambers within which various processing steps are performed upon the sample and diluent and passageways through which the sample and diluent are transferred. A transfer mechanism is movable through the rotor body by the influence of a magnetic field generated adjacent the transfer mechanism and movable along the rotor body, and the assembly utilizes centrifugal force, a transfer of momentum and capillary action to perform any of a number of processing steps such as separation, aliquoting, transference, washing, reagent addition and mixing of the sample and diluent within the rotor body. The rotor body is particularly suitable for automatic immunoassay analyses. 34 figs.

  20. Rotor assembly and method for automatically processing liquids

    DOE Patents [OSTI]

    Burtis, Carl A.; Johnson, Wayne F.; Walker, William A.

    1992-01-01

    A rotor assembly for performing a relatively large number of processing steps upon a sample, such as a whole blood sample, and a diluent, such as water, includes a rotor body for rotation about an axis and including a network of chambers within which various processing steps are performed upon the sample and diluent and passageways through which the sample and diluent are transferred. A transfer mechanism is movable through the rotor body by the influence of a magnetic field generated adjacent the transfer mechanism and movable along the rotor body, and the assembly utilizes centrifugal force, a transfer of momentum and capillary action to perform any of a number of processing steps such as separation, aliquoting, transference, washing, reagent addition and mixing of the sample and diluent within the rotor body. The rotor body is particularly suitable for automatic immunoassay analyses.

  1. Automatic Microaneurysm Detection and Characterization Through Digital Color Fundus Images

    SciTech Connect (OSTI)

    Martins, Charles; Veras, Rodrigo; Ramalho, Geraldo; Medeiros, Fatima; Ushizima, Daniela

    2008-08-29

    Ocular fundus images can provide information about retinal, ophthalmic, and even systemic diseases such as diabetes. Microaneurysms (MAs) are the earliest sign of Diabetic Retinopathy, a frequently observed complication in both type 1 and type 2 diabetes. Robust detection of MAs in digital color fundus images is critical in the development of automated screening systems for this kind of disease. Automatic grading of these images is being considered by health boards so that the human grading task is reduced. In this paper we describe the segmentation and feature extraction methods for candidate MA detection. We show that the candidate MAs detected with this methodology have been successfully classified by an MLP neural network (correct classification of 84%).

  2. Automatic Labeling for Entity Extraction in Cyber Security

    SciTech Connect (OSTI)

    Bridges, Robert A; Jones, Corinne L; Iannacone, Michael D; Testa, Kelly M; Goodall, John R

    2014-01-01

    Timely analysis of cyber-security information necessitates automated information extraction from unstructured text. While state-of-the-art extraction methods produce extremely accurate results, they require ample training data, which is generally unavailable for specialized applications, such as detecting security related entities; moreover, manual annotation of corpora is very costly and often not a viable solution. In response, we develop a very precise method to automatically label text from several data sources by leveraging related, domain-specific, structured data and provide public access to a corpus annotated with cyber-security entities. Next, we implement a Maximum Entropy Model trained with the average perceptron on a portion of our corpus (~750,000 words) and achieve near perfect precision, recall, and accuracy, with training times under 17 seconds.
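The averaged perceptron mentioned in this abstract keeps a running average of the weight vector over all updates, which stabilizes the final model compared with using the last weights alone. The paper trains a multi-class Maximum Entropy model; the toy binary sketch below only illustrates the averaging step, and every name in it is illustrative rather than taken from the paper:

```python
def train_averaged_perceptron(data, epochs=10):
    """data: list of (features, label) pairs, where features is a
    dict of feature name -> value and label is +1 or -1.
    Returns the averaged weight vector as a dict."""
    w = {}       # current weights
    totals = {}  # running sum of weights across all examples seen
    t = 0        # number of examples processed
    for _ in range(epochs):
        for feats, label in data:
            t += 1
            score = sum(w.get(f, 0.0) * v for f, v in feats.items())
            if label * score <= 0:  # mistake (or zero score): update
                for f, v in feats.items():
                    w[f] = w.get(f, 0.0) + label * v
            # accumulate the current weights for averaging
            for f, val in w.items():
                totals[f] = totals.get(f, 0.0) + val
    return {f: s / t for f, s in totals.items()}
```

On a linearly separable toy set, the averaged weights give positive scores to positive-class features and negative scores to negative-class features.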

  3. AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA

    SciTech Connect (OSTI)

    Pichara, Karim; Protopapas, Pavlos

    2013-11-10

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.

  4. Automatic Monitoring & Control of Polymer Reactions Development and Implementation of an Automatic Continuous Online Monitoring and Control Platform for Polymerization Reactions

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Automatic Monitoring & Control of Polymer Reactions Development and Implementation of an Automatic Continuous Online Monitoring and Control Platform for Polymerization Reactions Enabling energy and resource efficiency in polymer manufacturing Polymers, such as plastics, are an important class of chemical compounds composed of many repeated sub-units of monomers. The ability to engineer them to yield a desired set of properties (strength, stiffness, density, heat resistance, electrical

  5. Powered by NERSC, a Database of Billions of Genes and Counting!

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Powered by NERSC, a Database of Billions of Genes and Counting! Powered by NERSC, a Database of Billions of Genes and Counting! With More than a Billion Microbial genes, IMG/M Breaks a Record January 26, 2012 Linda Vu, lvu@lbl.gov, +1 510 495 2402 IMG/M team celebrates the recording of 1 billionth gene. Microbes are microscopic organisms that live in every nook and cranny of our planet. Without them, plants wouldn't grow, garbage wouldn't decay, humans wouldn't digest food, and there would

  6. Apparatus and method for temperature correction and expanded count rate of inorganic scintillation detectors

    DOE Patents [OSTI]

    Ianakiev, Kiril D.; Hsue, Sin Tao; Browne, Michael C.; Audia, Jeffrey M.

    2006-07-25

    The present invention includes an apparatus and corresponding method for temperature correction and count rate expansion of inorganic scintillation detectors. A temperature sensor is attached to an inorganic scintillation detector. The inorganic scintillation detector, due to interaction with incident radiation, creates light pulse signals. A photoreceiver processes the light pulse signals to current signals. Temperature correction circuitry uses a fast light component signal, a slow light component signal, and the temperature signal from the temperature sensor to correct the inorganic scintillation detector signal output and expand the count rate.

  7. ULTRAVIOLET NUMBER COUNTS OF GALAXIES FROM SWIFT ULTRAVIOLET/OPTICAL TELESCOPE DEEP IMAGING OF THE CHANDRA DEEP FIELD SOUTH

    SciTech Connect (OSTI)

    Hoversten, E. A.; Gronwall, C.; Koch, T. S.; Roming, P. W. A.; Siegel, M. H.; Berk, D. E. Vanden; Breeveld, A. A.; Curran, P. A.; Still, M.

    2009-11-10

    Deep Swift UV/Optical Telescope (UVOT) imaging of the Chandra Deep Field South is used to measure galaxy number counts in three near-ultraviolet (NUV) filters (uvw2: 1928 Å, uvm2: 2246 Å, and uvw1: 2600 Å) and the u band (3645 Å). UVOT observations cover the break in the slope of the NUV number counts with greater precision than the number counts by the Hubble Space Telescope Space Telescope Imaging Spectrograph and the Galaxy Evolution Explorer, spanning the range 21 ≲ m_AB ≲ 25. Model number counts confirm earlier investigations in favoring models with an evolving galaxy luminosity function.

  8. Calibration of the Accuscan II In Vivo System for Whole Body Counting

    SciTech Connect (OSTI)

    Orval R. Perry; David L. Georgeson

    2011-08-01

    This report describes the April 2011 calibration of the Accuscan II HpGe In Vivo system for whole body counting. The source used for the calibration was a NIST traceable BOMAB manufactured by DOE as INL2006 BOMAB containing Eu-154, Eu-155, Eu-152, Sb-125 and Y-88 with energies from 27 keV to 1836 keV with a reference date of 11/29/2006. The actual usable energy range was 86.5 keV to 1597 keV on 4/21/2011. The BOMAB was constructed inside the Accuscan II counting 'tub' in the order of legs, thighs, abdomen, thorax/arms, neck, and head. Each piece was taped to the backwall of the counter. The arms were taped to the thorax. The phantom was constructed between the v-ridges on the backwall of the Accuscan II counter. The energy and efficiency calibrations were performed using the INL2006 BOMAB. The calibrations were performed with the detectors in the scanning mode. This report includes an overview introduction and records for the energy/FWHM and efficiency calibration including performance verification and validation counting. The Accuscan II system was successfully calibrated for whole body counting and verified in accordance with ANSI/HPS N13.30-1996 criteria.
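An efficiency calibration like the one described in this report relates net peak counts to the number of photons the phantom emitted during the count. A minimal sketch using the standard full-energy-peak efficiency definition; the function name, argument names, and units are illustrative, not taken from the report:

```python
def counting_efficiency(net_counts, live_time_s, activity_bq, gamma_yield):
    """Full-energy-peak counting efficiency: net peak counts divided by
    the number of photons of that energy emitted during the count.

    activity_bq:  source activity in becquerels (decays per second)
    gamma_yield:  photons emitted per decay at this energy
    """
    photons_emitted = activity_bq * gamma_yield * live_time_s
    return net_counts / photons_emitted
```

For example, 1000 net counts in 100 s from a 50 Bq source with a 0.8 photon yield corresponds to an efficiency of 0.25.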

  9. Historical review of lung counting efficiencies for low energy photon emitters

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jeffers, Karen L.; Hickman, David P.

    2014-03-01

    This publication reviews the measured efficiency and variability over time of a high purity planar germanium in vivo lung count system for multiple photon energies using increasingly thick overlays with the Lawrence Livermore Torso Phantom. Furthermore, the measured variations in efficiency are compared with the current requirement for in vivo bioassay performance as defined by the American National Standards Institute Standard.

  10. Statistical language analysis for automatic exfiltration event detection.

    SciTech Connect (OSTI)

    Robinson, David Gerald

    2010-04-01

    This paper discusses the recent development of a statistical approach for the automatic identification of anomalous network activity that is characteristic of exfiltration events. This approach is based on the language processing method referred to as latent Dirichlet allocation (LDA). Cyber security experts currently depend heavily on a rule-based framework for initial detection of suspect network events. The application of the rule set typically results in an extensive list of suspect network events that are then further explored manually for suspicious activity. The ability to identify anomalous network events is heavily dependent on the experience of the security personnel wading through the network log. Limitations of this approach are clear: rule-based systems only apply to exfiltration behavior that has previously been observed, and experienced cyber security personnel are rare commodities. Since the new methodology is not a discrete rule-based approach, it is more difficult for an insider to disguise the exfiltration events. A further benefit is that the methodology provides a risk-based approach that can be implemented in a continuous, dynamic or evolutionary fashion. This permits suspect network activity to be identified early with a quantifiable risk associated with decision making when responding to suspicious activity.

  11. Automatic design of 3-D fixtures and assembly pallets

    SciTech Connect (OSTI)

    Brost, R.C.; Peters, R.R.

    1998-12-01

    This paper presents an implemented algorithm that automatically designs fixtures and assembly pallets to hold three-dimensional parts. All fixtures generated by the algorithm employ round side locators, a side clamp, and cylindrical supports; depending on the value of an input-control flag, the fixture may also include swing-arm top clamps. Using these modular elements, the algorithm designs fixtures that rigidly constrain and locate a part, obey task constraints, are robust to part-shape variations, are easy to load, and are economical to produce. For the class of fixtures that are considered, the algorithm is guaranteed to find the global optimum design that satisfies these and other pragmatic conditions. The authors present the results of the algorithm applied to several practical manufacturing problems. For these complex problems, the algorithm typically returns initial high-quality fixture designs in less than a minute, and identifies the global optimum design in just over an hour. The algorithm is also capable of solving difficult design problems where a single fixture is desired that can hold either of two parts.

  12. Automatic control in petroleum, petrochemical and desalination industries

    SciTech Connect (OSTI)

    Kotob, S.

    1986-01-01

    This is the second IFAC workshop on the subject of Automatic Control in Oil and Desalination Industries. Presentations and discussions underscored the priorities of oil and desalination industries in achieving better overall quality, improved energy use, lower cost, and better safety and security. These factors will take on added importance to oil exporting nations that have been hit recently by large oil price declines, which are forcing them to improve the efficiency of their industries and rationalize all new capital expenditures. Papers presented at the workshop included reviews of theoretical developments in control and research in modelling, optimization, instrumentation and control. They included the latest developments in applications of control systems to petroleum, petrochemical and desalination industries such as refineries, multi-stage flash desalination, chemical reactors, and bioreactors. The papers covered the latest in the applications of adaptive control, robust control, decentralized control, bilinear control, measurement techniques, plant optimization and maintenance, and artificial intelligence. Several case studies on modernization of refineries and controls and its economics were included. Two panel discussions, on new projects at the Kuwait National Petroleum Company (KNPC) and on needs for control systems, were held. Participation in the workshop came from the oil industry and academic institutions.

  13. Automatic control and detector for three-terminal resistance measurement

    DOE Patents [OSTI]

    Fasching, George E.

    1976-10-26

    A device is provided for automatic control and detection in a three-terminal resistance measuring instrument. The invention is useful for the rapid measurement of the resistivity of various bulk material with a three-terminal electrode system. The device maintains the current through the sample at a fixed level while measuring the voltage across the sample to detect the sample resistance. The three-electrode system contacts the bulk material and the current through the sample is held constant by means of a control circuit connected to a first of the three electrodes and works in conjunction with a feedback controlled amplifier to null the voltage between the first electrode and a second electrode connected to the controlled amplifier output. An A.C. oscillator provides a source of sinusoidal reference voltage of the frequency at which the measurement is to be executed. Synchronous reference pulses for synchronous detectors in the control circuit and an output detector circuit are provided by a synchronous pulse generator. The output of the controlled amplifier circuit is sampled by an output detector circuit to develop at an output terminal thereof a D.C. voltage which is proportional to the sample resistance R. The sample resistance is that segment of the sample between the area of the first electrode and the third electrode, which is connected to ground potential.

  14. Automatic ball bar for a coordinate measuring machine

    DOE Patents [OSTI]

    Jostlein, H.

    1997-07-15

    An automatic ball bar for a coordinate measuring machine determines the accuracy of a coordinate measuring machine having at least one servo drive. The apparatus comprises a first and second gauge ball connected by a telescoping rigid member. The rigid member includes a switch such that inward radial movement of the second gauge ball relative to the first gauge ball causes activation of the switch. The first gauge ball is secured in a first magnetic socket assembly in order to maintain the first gauge ball at a fixed location with respect to the coordinate measuring machine. A second magnetic socket assembly secures the second gauge ball to the arm or probe holder of the coordinate measuring machine. The second gauge ball is then directed by the coordinate measuring machine to move radially inward from a point just beyond the length of the ball bar until the switch is activated. Upon switch activation, the position of the coordinate measuring machine is determined and compared to known ball bar length such that the accuracy of the coordinate measuring machine can be determined. 5 figs.

  15. Automatic ball bar for a coordinate measuring machine

    DOE Patents [OSTI]

    Jostlein, Hans

    1997-01-01

    An automatic ball bar for a coordinate measuring machine determines the accuracy of a coordinate measuring machine having at least one servo drive. The apparatus comprises a first and second gauge ball connected by a telescoping rigid member. The rigid member includes a switch such that inward radial movement of the second gauge ball relative to the first gauge ball causes activation of the switch. The first gauge ball is secured in a first magnetic socket assembly in order to maintain the first gauge ball at a fixed location with respect to the coordinate measuring machine. A second magnetic socket assembly secures the second gauge ball to the arm or probe holder of the coordinate measuring machine. The second gauge ball is then directed by the coordinate measuring machine to move radially inward from a point just beyond the length of the ball bar until the switch is activated. Upon switch activation, the position of the coordinate measuring machine is determined and compared to known ball bar length such that the accuracy of the coordinate measuring machine can be determined.
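The accuracy check described in the two ball bar records above reduces to comparing the CMM-measured separation of the two gauge balls, at the moment the switch activates, with the known bar length. A minimal sketch of that comparison; the coordinate layout and function name are illustrative, not from the patents:

```python
import math

def ball_bar_error(p_fixed, p_probe, nominal_length):
    """Deviation between the CMM-measured gauge-ball separation and the
    known ball bar length.

    p_fixed, p_probe: (x, y, z) ball-center coordinates in mm, as
    reported by the coordinate measuring machine at switch activation.
    nominal_length:   calibrated ball bar length in mm.
    Returns the signed error in mm (positive = CMM reads long).
    """
    measured = math.dist(p_fixed, p_probe)  # Euclidean distance
    return measured - nominal_length
```

For example, a measured separation of 300.02 mm against a 300.00 mm bar indicates a +0.02 mm error along that direction.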

  16. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    SciTech Connect (OSTI)

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Wu, Kesheng; Prabhat,; Weber, Gunther H.; Ushizima, Daniela M.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and based on the information derived, perform analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  17. Automatic thermocouple positioner for use in vacuum furnaces

    DOE Patents [OSTI]

    Mee, D.K.; Stephens, A.E.

    1980-06-06

    The invention is a simple and reliable mechanical arrangement for automatically positioning a thermocouple-carrying rod in a vacuum-furnace assembly of the kind including a casing, a furnace mounted in the casing, and a charge-containing crucible mounted in the furnace for vertical movement between a lower (loading) position and a raised (charge-melting) position. In a preferred embodiment, a welded-diaphragm metal bellows is mounted above the furnace, the upper end of the bellows being fixed against movement and the lower end of the bellows being affixed to support means for a thermocouple-carrying rod which is vertically oriented and extends freely through the furnace lid toward the mouth of the crucible. The support means and rod are mounted for relative vertical movement. Before pumpdown of the furnace, the differential pressure acting on the bellows causes it to contract and lift the thermocouple rod to a position where it will not be contacted by the crucible charge when the crucible is elevated to its raised position. During pumpdown, the bellows expands downward, lowering the thermocouple rod and its support. The bellows expands downward beyond a point where downward movement of the thermocouple rod is arrested by contact with the crucible charge and to a point where the upper end of the thermocouple extends well above the thermocouple support. During subsequent melting of the charge, the thermocouple sinks into the melt to provide an accurate measurement of melt temperatures.

  18. Automatic thermocouple positioner for use in vacuum furnaces

    DOE Patents [OSTI]

    Mee, David K.; Stephens, Albert E.

    1981-01-01

    The invention is a simple and reliable mechanical arrangement for automatically positioning a thermocouple-carrying rod in a vacuum-furnace assembly of the kind including a casing, a furnace mounted in the casing, and a charge-containing crucible mounted in the furnace for vertical movement between a lower (loading) position and a raised (charge-melting) position. In a preferred embodiment, a welded-diaphragm metal bellows is mounted above the furnace, the upper end of the bellows being fixed against movement and the lower end of the bellows being affixed to support means for a thermocouple-carrying rod which is vertically oriented and extends freely through the furnace lid toward the mouth of the crucible. The support means and rod are mounted for relative vertical movement. Before pumpdown of the furnace, the differential pressure acting on the bellows causes it to contract and lift the thermocouple rod to a position where it will not be contacted by the crucible charge when the crucible is elevated to its raised position. During pumpdown, the bellows expands downward, lowering the thermocouple rod and its support. The bellows expands downward beyond a point where downward movement of the thermocouple rod is arrested by contact with the crucible charge and to a point where the upper end of the thermocouple extends well above the thermocouple support. During subsequent melting of the charge, the thermocouple sinks into the melt to provide an accurate measurement of melt temperatures.

  19. Computer controlled synchronous shifting of an automatic transmission

    DOE Patents [OSTI]

    Davis, Roy I.; Patil, Prabhakar B.

    1989-01-01

    A multiple forward speed automatic transmission produces its lowest forward speed ratio when a hydraulic clutch and hydraulic brake are disengaged and a one-way clutch connects a ring gear to the transmission casing. Second forward speed ratio results when the hydraulic clutch is engaged to connect the ring gear to the planetary carrier of a second gear set. Reverse drive and regenerative operation result when an hydraulic brake fixes the planetary and the direction of power flow is reversed. Various sensors produce signals representing the torque at the output of the transmission or drive wheels, the speed of the power source, and the hydraulic pressure applied to a clutch and brake. A control algorithm produces input data representing a commanded upshift, a commanded downshift, a commanded transmission output torque, and commanded power source speed. A microprocessor processes the inputs and produces a response to them in accordance with the execution of a control algorithm. Output or response signals cause selective engagement and disengagement of the clutch and brake at a rate that satisfies the requirements for a short gear ratio change and smooth torque transfer between the friction elements.

  20. Computer controllable synchronous shifting of an automatic transmission

    DOE Patents [OSTI]

    Davis, R.I.; Patil, P.B.

    1989-08-08

    A multiple forward speed automatic transmission produces its lowest forward speed ratio when a hydraulic clutch and hydraulic brake are disengaged and a one-way clutch connects a ring gear to the transmission casing. Second forward speed ratio results when the hydraulic clutch is engaged to connect the ring gear to the planetary carrier of a second gear set. Reverse drive and regenerative operation result when an hydraulic brake fixes the planetary and the direction of power flow is reversed. Various sensors produce signals representing the torque at the output of the transmission or drive wheels, the speed of the power source, and the hydraulic pressure applied to a clutch and brake. A control algorithm produces input data representing a commanded upshift, a commanded downshift, a commanded transmission output torque, and commanded power source speed. A microprocessor processes the inputs and produces a response to them in accordance with the execution of a control algorithm. Output or response signals cause selective engagement and disengagement of the clutch and brake at a rate that satisfies the requirements for a short gear ratio change and smooth torque transfer between the friction elements. 6 figs.

  1. Evaluation of Automatic Atlas-Based Lymph Node Segmentation for Head-and-Neck Cancer

    SciTech Connect (OSTI)

    Stapleford, Liza J.; Lawson, Joshua D.; Perkins, Charles; Edelman, Scott; Davis, Lawrence

    2010-07-01

    Purpose: To evaluate if automatic atlas-based lymph node segmentation (LNS) improves efficiency and decreases inter-observer variability while maintaining accuracy. Methods and Materials: Five physicians with head-and-neck IMRT experience used computed tomography (CT) data from 5 patients to create bilateral neck clinical target volumes covering specified nodal levels. A second contour set was automatically generated using a commercially available atlas. Physicians modified the automatic contours to make them acceptable for treatment planning. To assess contour variability, the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm was used to take collections of contours and calculate a probabilistic estimate of the 'true' segmentation. Differences between the manual, automatic, and automatic-modified (AM) contours were analyzed using multiple metrics. Results: Compared with the 'true' segmentation created from manual contours, the automatic contours had a high degree of accuracy, with sensitivity, Dice similarity coefficient, and mean/max surface disagreement values comparable to the average manual contour (86%, 76%, 3.3/17.4 mm automatic vs. 73%, 79%, 2.8/17 mm manual). The AM group was more consistent than the manual group for multiple metrics, most notably reducing the range of contour volume (106-430 mL manual vs. 176-347 mL AM) and percent false positivity (1-37% manual vs. 1-7% AM). Average contouring time savings with the automatic segmentation was 11.5 min per patient, a 35% reduction. Conclusions: Using the STAPLE algorithm to generate 'true' contours from multiple physician contours, we demonstrated that, in comparison with manual segmentation, atlas-based automatic LNS for head-and-neck cancer is accurate, efficient, and reduces interobserver variability.
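The Dice similarity coefficient used in the study above to compare contours is twice the overlap between two segmentations divided by the sum of their sizes. A minimal sketch on binary masks; representing each mask as a set of voxel indices is an illustrative choice, not the study's implementation:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two segmentations, each given as a set
    of voxel indices. Returns a value in [0, 1], where 1 means the
    two segmentations are identical."""
    if not mask_a and not mask_b:
        return 1.0  # both empty: perfect agreement by convention
    overlap = len(mask_a & mask_b)
    return 2.0 * overlap / (len(mask_a) + len(mask_b))
```

For instance, two four-voxel masks sharing two voxels score 2·2/(4+4) = 0.5.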

  2. A New Technique for Studying the Fano Factor And the Mean Energy Per Ion Pair in Counting Gases

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Panksky, A.; Breskin, A.; Chechik, R.

    1996-04-01

    A new method is presented for deriving the Fano factor and the mean energy per ion pair in the ultrasoft x-ray energy range. It is based on counting electrons deposited by a photon in a low-pressure gas, and is applicable for all counting gases. The energy dependence of these parameters for several hydrocarbons and gas mixtures is presented.

  3. Resiliency: Planning Ahead for Disasters - Continuum Magazine...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    operating standards require grid-connected solar PV systems automatically disconnect from the grid during a power outage to protect utility workers and grid integrity on restart. ...

  4. Automatic CT simulation optimization for radiation therapy: A general strategy

    SciTech Connect (OSTI)

    Li, Hua Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M.; Mutic, Sasa; Yu, Lifeng; Anastasio, Mark A.; Low, Daniel A.

    2014-03-15

    Purpose: In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. Methods: The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Results: Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube ...

  5. Wedge sampling for computing clustering coefficients and triangle counts on large graphs

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Seshadhri, C.; Pinar, Ali; Kolda, Tamara G.

    2014-05-08

    Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. Some of the most useful graph metrics are based on triangles, such as those measuring social cohesion. Despite the importance of these triadic measures, algorithms to compute them can be extremely expensive. We discuss the method of wedge sampling. This versatile technique allows for the fast and accurate approximation of various types of clustering coefficients and triangle counts. Furthermore, these techniques are extensible to counting directed triangles in digraphs. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state of the art, while providing nearly the accuracy of full enumeration.
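
The wedge-sampling idea is simple enough to sketch. In this illustrative implementation (the graph representation and parameter choices are assumptions, not the authors' code), a wedge center is drawn with probability proportional to its wedge count, two distinct neighbors are sampled, and the fraction of closed wedges estimates the global clustering coefficient.

```python
import random

# Sketch of uniform wedge sampling: the fraction of sampled wedges that are
# "closed" (endpoints adjacent) estimates the global clustering coefficient.
# Graph is a dict: vertex -> set of neighbors.
def clustering_by_wedge_sampling(adj, samples, rng=random.Random(0)):
    verts = [v for v in adj if len(adj[v]) >= 2]
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in verts]  # wedges at v
    closed = 0
    for _ in range(samples):
        v = rng.choices(verts, weights=weights)[0]  # center ∝ wedge count
        a, b = rng.sample(sorted(adj[v]), 2)        # two distinct neighbors
        closed += b in adj[a]                       # is the wedge closed?
    return closed / samples

# Complete graph K4: every wedge is closed, so the estimate is exactly 1.0.
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(clustering_by_wedge_sampling(k4, 200))  # 1.0
```

The point of the technique is that `samples` can be far smaller than the total wedge count while still giving a tight estimate, which is where the orders-of-magnitude speedup over full enumeration comes from.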

  6. ATCOM: Automatically Tuned Collective Communication System for SMP Clusters

    SciTech Connect (OSTI)

    Meng-Shiou Wu

    2005-12-17

    Conventional implementations of collective communications are based on point-to-point communications, and their optimizations have focused on the efficiency of those communication algorithms. However, point-to-point communications are not the optimal choice for modern computing clusters of SMPs due to their two-level communication structure. In recent years, a few research efforts have investigated efficient collective communications for SMP clusters. This dissertation focuses on platform-independent algorithms and implementations in this area. There are two main approaches to implementing efficient collective communications for clusters of SMPs: using shared memory operations for intra-node communications, and overlapping inter-node/intra-node communications. The former fully utilizes the hardware-based shared memory of an SMP, and the latter takes advantage of the inherent hierarchy of the communications within a cluster of SMPs. Previous studies focused on clusters of SMPs from certain vendors, and the previously proposed methods are not portable to other systems. Because the performance optimization issue is very complicated and the development process is very time-consuming, it is highly desirable to have self-tuning, platform-independent implementations. As proven in this dissertation, such an implementation can significantly outperform other point-to-point based portable implementations and some platform-specific implementations. The dissertation describes in detail the architecture of the platform-independent implementation. There are four system components: shared memory-based collective communications, overlapping mechanisms for inter-node and intra-node communications, a prediction-based tuning module, and a micro-benchmark based tuning module. Each component is carefully designed with the goal of automatic tuning in mind.

  7. System for computer controlled shifting of an automatic transmission

    DOE Patents [OSTI]

    Patil, Prabhakar B.

    1989-01-01

    In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during, and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics, employing the mathematical model, is used to study the effects of changes in the values of the parameters established from closed-loop control of the clutch hydraulic pressure and the power source torque on the shift quality. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

  8. Closed loop computer control for an automatic transmission

    DOE Patents [OSTI]

    Patil, Prabhakar B.

    1989-01-01

    In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during, and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics, employing the mathematical model, is used to study the effects of changes in the values of the parameters established from closed-loop control of the clutch hydraulic pressure and the power source torque on the shift quality. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.
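
The closed-loop idea in these two patent abstracts can be illustrated with a generic discrete-time PI loop; the first-order "powertrain" model and every gain below are hypothetical stand-ins for illustration, not the patented control law.

```python
# Illustrative closed-loop feedback of the kind the patents describe: a
# controller adjusts a commanded input u so the transmission output torque
# tracks a reference during a shift.  Plant model and gains are hypothetical.
def simulate_shift(ref=100.0, steps=400, dt=0.005, a=2.0, b=4.0, kp=3.0, ki=8.0):
    torque, integ, history = 0.0, 0.0, []
    for _ in range(steps):
        err = ref - torque
        integ += err * dt
        u = kp * err + ki * integ             # PI control law
        torque += dt * (-a * torque + b * u)  # toy first-order powertrain dynamics
        history.append(torque)
    return history

out = simulate_shift()
print(round(out[-1], 1))  # 100.0 -- output torque settles at the reference
```

The integral term is what drives the steady-state tracking error to zero; the proportional term sets how aggressively the disturbance is rejected during the shift.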

  9. $598,890 raised for northern New Mexico students, and counting...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Community Connections: Your link to news and opportunities from Los Alamos National Laboratory. $598,890 raised for northern New Mexico students, and counting... LAESF donations give scholars of all income levels access to higher education. July 1, 2015. Celebrating a new record-breaking total for the Los Alamos Employees' Scholarship Fund are, from left, scholarship program chair Steven Girrens (Associate Director for

  10. Improving Neutron Measurement Capabilities; Expanding the Limits of Correlated Neutron Counting

    SciTech Connect (OSTI)

    Santi, Peter Angelo; Geist, William H.; Dougan, Arden

    2015-11-05

    A number of technical and practical limitations exist within the neutron correlated counting techniques used in safeguards, especially within the algorithms that are used to process and analyze the detected neutron signals. A multi-laboratory effort is underway to develop new and improved analysis and data processing algorithms based on fundamental physics principles to extract additional or more accurate information about nuclear material bearing items.

  11. Counts-in-Cylinders in the Sloan Digital Sky Survey with Comparisons to N-Body

    SciTech Connect (OSTI)

    Berrier, Heather D.; Barton, Elizabeth J.; Berrier, Joel C.; Bullock, James S.; Zentner, Andrew R.; Wechsler, Risa H. /KIPAC, Menlo Park /SLAC

    2010-12-16

    Environmental statistics provide a necessary means of comparing the properties of galaxies in different environments and a vital test of models of galaxy formation within the prevailing, hierarchical cosmological model. We explore counts-in-cylinders, a common statistic defined as the number of companions of a particular galaxy found within a given projected radius and redshift interval. Galaxy distributions with the same two-point correlation functions do not necessarily have the same companion count distributions. We use this statistic to examine the environments of galaxies in the Sloan Digital Sky Survey, Data Release 4. We also make preliminary comparisons to four models for the spatial distributions of galaxies, based on N-body simulations, and data from SDSS DR4 to study the utility of the counts-in-cylinders statistic. There is a very large scatter between the number of companions a galaxy has and the mass of its parent dark matter halo and the halo occupation, limiting the utility of this statistic for certain kinds of environmental studies. We also show that prevalent, empirical models of galaxy clustering that match observed two- and three-point clustering statistics well fail to reproduce some aspects of the observed distribution of counts-in-cylinders on 1, 3, and 6 h⁻¹ Mpc scales. All models that we explore underpredict the fraction of galaxies with few or no companions in 3 and 6 h⁻¹ Mpc cylinders. Roughly 7% of galaxies in the real universe are significantly more isolated within a 6 h⁻¹ Mpc cylinder than the galaxies in any of the models we use. Simple, phenomenological models that map galaxies to dark matter halos fail to reproduce high-order clustering statistics in low-density environments.
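
A bare-bones version of the statistic itself can be sketched as follows; the toy Cartesian coordinates stand in for real survey data, with no angular geometry or selection effects.

```python
# Minimal counts-in-cylinders: for each galaxy, count companions whose
# projected separation is below r_proj and whose redshift differs by less
# than dz.  Galaxies here are toy (x, y, z_redshift) tuples; a real analysis
# would use angular separations and survey geometry.
def counts_in_cylinders(galaxies, r_proj, dz):
    counts = []
    for i, (xi, yi, zi) in enumerate(galaxies):
        n = 0
        for j, (xj, yj, zj) in enumerate(galaxies):
            if i == j:
                continue
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= r_proj ** 2 and abs(zi - zj) <= dz:
                n += 1
        counts.append(n)
    return counts

gals = [(0.0, 0.0, 0.100), (0.5, 0.0, 0.101), (10.0, 0.0, 0.100)]
print(counts_in_cylinders(gals, 1.0, 0.005))  # [1, 1, 0]
```

The distribution of these per-galaxy counts, rather than any single value, is what the paper compares between SDSS and the N-body models.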

  12. Particle count monitoring of reverse osmosis water treatment for removal of low-level radionuclides

    SciTech Connect (OSTI)

    Moritz, E.J.; Hoffman, C.R.; Hergert, T.R.

    1995-03-01

    Laser diode particle counting technology and analytical measurements were used to evaluate a pilot-scale reverse osmosis (RO) water treatment system for removal of particulate matter and sub-picocurie low-level radionuclides. Stormwater mixed with Waste Water Treatment Plant (WWTP) effluent from the Rocky Flats Environmental Technology Site (RFETS), formerly a Department of Energy (DOE) nuclear weapons production facility, was treated. No chemical pretreatment of the water was utilized during this study. The treatment system was staged as follows: multimedia filtration, granular activated carbon adsorption, hollow tube ultrafiltration, and reverse osmosis membrane filtration. Various recovery rates and two RO membrane models were tested. Analytical measurements included total suspended solids (TSS), total dissolved solids (TDS), gross alpha (α) and gross beta (β) activity, uranium isotopes ²³³/²³⁴U and ²³⁸U, plutonium ²³⁹/²⁴⁰Pu, and americium ²⁴¹Am. Particle measurements between 1 and 150 microns (µm) included differential particle counts (DPC) and total particle counts (TPC) before and after treatment at various sampling points throughout the test. Performance testing showed this treatment system produced a high-quality effluent in clarity and purity. Compared to raw water levels, TSS was reduced to below the detection limit of 5 milligrams per liter (mg/L) and TDS was reduced by 98%. Gross α was essentially removed 100%, and gross β was reduced an average of 94%. Uranium activity was reduced by 99%. TPC between 1 and 150 µm were reduced by an average of 99.8% to less than 1,000 counts per milliliter (mL), similar in purity to a good drinking water treatment plant. Raw water levels of ²³⁹/²⁴⁰Pu and ²⁴¹Am were below reliable quantitation limits, and thus no removal efficiencies could be determined for these species.

  13. Cosmology Constraints from the Weak Lensing Peak Counts and the Power Spectrum in CFHTLenS

    SciTech Connect (OSTI)

    Liu, Jia; May, Morgan; Petri, Andrea; Haiman, Zoltan; Hui, Lam; Kratochvil, Jan M.

    2015-03-04

    Lensing peaks have been proposed as a useful statistic, containing cosmological information from non-Gaussianities that is inaccessible from traditional two-point statistics such as the power spectrum or two-point correlation functions. Here we examine constraints on cosmological parameters from weak lensing peak counts, using the publicly available data from the 154 deg² CFHTLenS survey. We utilize a new suite of ray-tracing N-body simulations on a grid of 91 cosmological models, covering broad ranges of the three parameters Ωm, σ8, and w, and replicating the galaxy sky positions, redshifts, and shape noise in the CFHTLenS observations. We then build an emulator that interpolates the power spectrum and the peak counts to an accuracy of ≤ 5%, and compute the likelihood in the three-dimensional parameter space (Ωm, σ8, w) from both observables. We find that constraints from peak counts are comparable to those from the power spectrum, and somewhat tighter when different smoothing scales are combined. Neither observable can constrain w without external data. When the power spectrum and peak counts are combined, the area of the error “banana” in the (Ωm, σ8) plane reduces by a factor of ≈ two, compared to using the power spectrum alone. For a flat Λ cold dark matter model, combining both statistics, we obtain the constraint σ8(Ωm/0.27)^0.63 = 0.85 ± 0.03.

  14. Cosmology Constraints from the Weak Lensing Peak Counts and the Power Spectrum in CFHTLenS

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Liu, Jia; May, Morgan; Petri, Andrea; Haiman, Zoltan; Hui, Lam; Kratochvil, Jan M.

    2015-03-04

    Lensing peaks have been proposed as a useful statistic, containing cosmological information from non-Gaussianities that is inaccessible from traditional two-point statistics such as the power spectrum or two-point correlation functions. Here we examine constraints on cosmological parameters from weak lensing peak counts, using the publicly available data from the 154 deg² CFHTLenS survey. We utilize a new suite of ray-tracing N-body simulations on a grid of 91 cosmological models, covering broad ranges of the three parameters Ωm, σ8, and w, and replicating the galaxy sky positions, redshifts, and shape noise in the CFHTLenS observations. We then build an emulator that interpolates the power spectrum and the peak counts to an accuracy of ≤ 5%, and compute the likelihood in the three-dimensional parameter space (Ωm, σ8, w) from both observables. We find that constraints from peak counts are comparable to those from the power spectrum, and somewhat tighter when different smoothing scales are combined. Neither observable can constrain w without external data. When the power spectrum and peak counts are combined, the area of the error “banana” in the (Ωm, σ8) plane reduces by a factor of ≈ two, compared to using the power spectrum alone. For a flat Λ cold dark matter model, combining both statistics, we obtain the constraint σ8(Ωm/0.27)^0.63 = 0.85 ± 0.03.

  15. Webinar: Energy Conservation Standards for Automatic Commercial Ice Makers; Notice of Public Meeting

    Broader source: Energy.gov [DOE]

    DOE is conducting a public meeting and webinar for the notice of public meeting regarding energy conservation standards for automatic commercial ice makers. For more information, please visit the...

  16. National Ignition Facility sub-system design requirements automatic alignment system SSDR 1.5.5

    SciTech Connect (OSTI)

    VanArsdall, P.; Bliss, E.

    1996-09-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the Automatic Alignment System, which is part of the NIF Integrated Computer Control System (ICCS).

  17. Calibration of the Accuscan II In Vivo System for I-125 Thyroid Counting

    SciTech Connect (OSTI)

    Ovard R. Perry; David L. Georgeson

    2011-07-01

    This report describes the March 2011 calibration of the Accuscan II HpGe In Vivo system for I-125 thyroid counting. The source used for the calibration was a DOE-manufactured Am-241/Eu-152 source contained in a 22 mL vial (BEA Am-241/Eu-152 RMC II-1) with energies from 26 keV to 344 keV. The center of the detector housing was positioned 64 inches from the vault floor. This position places the approximate center line of the detector housing at the center line of the source in the phantom thyroid tube. The energy and efficiency calibrations were performed using an RMC II phantom (Appendix J). Performance testing was conducted using source BEA Am-241/Eu-152 RMC II-1, and validation testing was performed using an I-125 source in a 30 mL vial (I-125 BEA Thyroid 002) and an ANSI N44.3 phantom (Appendix I). This report includes an overview introduction and records for the energy/FWHM and efficiency calibration, including performance verification and validation counting. The Accuscan II system was successfully calibrated for counting the thyroid for I-125 and verified in accordance with ANSI/HPS N13.30-1996 criteria.

  18. Calibration of the Accuscan II In Vivo System for I-131 Thyroid Counting

    SciTech Connect (OSTI)

    Orval R. Perry; David L. Georgeson

    2011-07-01

    This report describes the March 2011 calibration of the Accuscan II HpGe In Vivo system for I-131 thyroid counting. The source used for the calibration was an Analytics mixed gamma source 82834-121 distributed in an epoxy matrix in a Wheaton liquid scintillation vial, with energies from 88.0 keV to 1836.1 keV. The center of the detectors was positioned 64 inches from the vault floor. This position places the approximate center line of the detectors at the center line of the source in the thyroid tube. The calibration was performed using an RMC II phantom (Appendix J). Validation testing was performed using a Ba-133 source and an ANSI N44.3 phantom (Appendix I). This report includes an overview introduction and records for the energy/FWHM and efficiency calibrations, including verification counting. The Accuscan II system was successfully calibrated for counting the thyroid for I-131 and verified in accordance with ANSI/HPS N13.30-1996 criteria.

  19. The effect of deadtime and electronic transients on the predelay bias in neutron coincidence counting

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Croft, Stephen; Favalli, Andrea; Swinhoe, Martyn T.; Goddard, Braden; Stewart, Scott

    2016-01-13

    In neutron coincidence counting using the shift register autocorrelation technique, a predelay is inserted before the opening of the (R+A)-gate. Operationally, the purpose of the predelay is to ensure that the (R+A)- and A-gates have matched effectiveness; otherwise a bias will result when the difference between the gates is used to calculate the accidentals-corrected net reals coincidence rate. The necessity for the predelay was established experimentally in the early practical development and deployment of the coincidence counting method. The choice of predelay for a given detection system is usually made experimentally, but even today long-standing traditional values (e.g., 4.5 µs) are often used. This, at least in part, reflects the fact that a deep understanding of why a finite predelay setting is needed, and how to control the underlying influences, has not been fully worked out. We attempt, in this paper, to gain some insight into the problem. One aspect we consider is the slowing down, thermalization, and diffusion of neutrons in the detector moderator. The other is the influence of deadtime and electronic transients. These may be classified as non-ideal detector behaviors because they are not included in the conventional model used to interpret measurement data. From improved understanding of the effect of deadtime and electronic transients on the predelay bias in neutron coincidence counting, the performance of both future and current coincidence counters may be improved.
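
The accidentals correction the abstract refers to can be sketched directly. The gate width, predelay, and pulse train below are invented for illustration; real shift-register electronics process every pulse in hardware rather than by scanning a timestamp list.

```python
# Sketch of the (R+A)/A gate scheme: for each trigger pulse, count neutrons
# in a gate opened after the predelay, and in an identical gate opened after
# a long delay (which samples only accidentals).  Net reals per trigger is
# the difference of the two means.  All times (in µs) are illustrative.
def reals_per_trigger(times, predelay, gate, long_delay):
    rpa = acc = 0
    for t in times:
        rpa += sum(1 for u in times if t + predelay < u <= t + predelay + gate)
        acc += sum(1 for u in times if t + long_delay < u <= t + long_delay + gate)
    return (rpa - acc) / len(times)

# Two correlated pairs (1 µs apart) plus one isolated count.
pulses = [0.0, 1.0, 50.0, 51.0, 200.0]
print(reals_per_trigger(pulses, predelay=0.5, gate=4.0, long_delay=1000.0))  # 0.4
```

The bias the paper studies arises when deadtime or transients make the two gates see different effective efficiencies, so the subtraction no longer cancels the accidentals exactly.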

  20. Characterization of energy response for photon-counting detectors using x-ray fluorescence

    SciTech Connect (OSTI)

    Ding, Huanjun; Cho, Hyo-Min; Molloi, Sabee; Barber, William C.; Iwanczyk, Jan S.

    2014-12-15

    Purpose: To investigate the feasibility of characterizing a Si strip photon-counting detector using x-ray fluorescence. Methods: X-ray fluorescence was generated using a pencil beam from a tungsten anode x-ray tube with 2 mm Al filtration. Spectra were acquired at 90° from the primary beam direction with an energy-resolved photon-counting detector based on an edge-illuminated Si strip detector. The distances from the source to the target and from the target to the detector were approximately 19 and 11 cm, respectively. Four different materials, containing silver (Ag), iodine (I), barium (Ba), and gadolinium (Gd), were placed in small plastic containers with a diameter of approximately 0.7 cm for x-ray fluorescence measurements. Linear regression analysis was performed to derive the gain and offset values for the correlation between the measured fluorescence peak centers and the known fluorescence energies. The energy resolutions and charge-sharing fractions were also obtained from analytical fittings of the recorded fluorescence spectra. An analytical model, which employed four parameters that can be determined from the fluorescence calibration, was used to estimate the detector response function. Results: Strong fluorescence signals of all four target materials were recorded with the investigated geometry for the Si strip detector. The average gain and offset of all pixels for detector energy calibration were determined to be 6.95 mV/keV and −66.33 mV, respectively. The detector's energy resolution remained at approximately 2.7 keV at low energies and increased slightly at 45 keV. The average charge-sharing fraction was estimated to be 36% within the investigated energy range of 20–45 keV. The simulated detector output based on the proposed response function agreed well with the experimental measurement. Conclusions: The performance of a spectral imaging system using energy-resolved photon-counting detectors is very dependent on the energy calibration of the detector.
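
The gain/offset calibration is an ordinary least-squares line fit of peak center (mV) against known fluorescence energy (keV). The sketch below uses synthetic, noiseless peak centers constructed from the reported averages and approximate K-alpha energies, purely for illustration.

```python
# Least-squares fit of measured fluorescence peak centers (mV) against the
# known fluorescence energies (keV), recovering a gain and offset as in the
# calibration above.  The data points are hypothetical, built from the
# reported averages (6.95 mV/keV, -66.33 mV).
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

energies = [22.2, 28.6, 32.2, 43.0]           # approx. Ag, I, Ba, Gd K-alpha, keV
peaks = [6.95 * e - 66.33 for e in energies]  # synthetic noiseless peak centers, mV
gain, offset = linear_fit(energies, peaks)
print(round(gain, 2), round(offset, 2))  # 6.95 -66.33
```

With real (noisy) peak centers the same fit would return per-pixel gains and offsets, whose averages are what the paper quotes.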

  1. The number counts and infrared backgrounds from infrared-bright galaxies

    SciTech Connect (OSTI)

    Hacking, P.B.; Soifer, B.T. (California Institute of Technology, Pasadena)

    1991-02-01

    Extragalactic number counts and diffuse backgrounds at 25, 60, and 100 microns are predicted using new luminosity functions and improved spectral-energy distribution density functions derived from IRAS observations of nearby galaxies. Galaxies at redshifts z less than 3 that are like those in the local universe should produce a minimum diffuse background of 0.0085, 0.038, and 0.13 MJy/sr at 25, 60, and 100 microns, respectively. Models with significant luminosity evolution predict backgrounds about a factor of 4 greater than this minimum. 22 refs.

  2. Laboratory adds a sixth R&D 100 award to its 2009 count

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This year's awards bring the Los Alamos total to 113 since the Laboratory first entered the competition in 1978. November 4, 2009. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multiple disciplines, from bioscience and sustainable energy sources to plasma physics and new materials.

  3. Longitudinal Bunch Pattern Measurements through Single Photon Counting at SPEAR3

    SciTech Connect (OSTI)

    Wang, Hongyi; /UC, San Diego

    2012-09-07

    The Stanford Synchrotron Radiation Lightsource (SSRL), a division of SLAC National Accelerator Laboratory, is a synchrotron light source that provides x-rays for experimental use. As electrons are bent in the storage ring, they emit electromagnetic radiation. There are 372 different buckets which electrons can be loaded into. Different filling patterns produce different types of x-rays. What is the bunch pattern at a given time? Which filling pattern is better? Are there any flaws to the current injection system? These questions can be answered with this single photon counting experiment.

  4. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    DOE Patents [OSTI]

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
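
The compile-and-summarize step the claims describe might look like the following sketch; the field names and the choice of metrics are hypothetical, not taken from the patent.

```python
# Sketch of the aggregation step: compile per-step process data, then sum
# them into overall process metrics presented to the user.  The "cycle_time"
# and "value_added" fields are invented for illustration.
def process_metrics(steps):
    total_cycle = sum(s["cycle_time"] for s in steps)
    value_added = sum(s["cycle_time"] for s in steps if s["value_added"])
    return {"total_cycle_time": total_cycle,
            "value_added_ratio": value_added / total_cycle}

steps = [{"cycle_time": 5.0, "value_added": True},
         {"cycle_time": 3.0, "value_added": False},
         {"cycle_time": 2.0, "value_added": True}]
print(process_metrics(steps))  # {'total_cycle_time': 10.0, 'value_added_ratio': 0.7}
```

Comparing such metrics computed under a batch model versus a lean model is the essence of the evaluation the patent automates.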

  5. Sequential data assimilation for single-molecule FRET photon-counting data

    SciTech Connect (OSTI)

    Matsunaga, Yasuhiro; Kidera, Akinori; Sugita, Yuji

    2015-06-07

    Data assimilation is a statistical method designed to improve the quality of numerical simulations in combination with real observations. Here, we develop a sequential data assimilation method that incorporates one-dimensional time-series data of smFRET (single-molecule Förster resonance energy transfer) photon-counting into conformational ensembles of biomolecules derived from replicated molecular dynamics (MD) simulations. A particle filter using a large number of replicated MD simulations with a likelihood function for smFRET photon-counting data is employed to screen the conformational ensembles that match the experimental data. We examine the performance of the method using emulated smFRET data and coarse-grained (CG) MD simulations of a dye-labeled polyproline-20. The method estimates the dynamics of the end-to-end distance from smFRET data as well as revealing that of latent conformational variables. The particle filter is also able to correct model parameter dependence in CG MD simulations. We discuss the applicability of the method to real experimental data for conformational dynamics of biomolecules.
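
A minimal particle filter with a Gaussian likelihood captures the propagate-weight-resample cycle the abstract describes. Everything below is a toy stand-in: a 1-D distance with drift replaces the replicated MD ensemble, and direct noisy distance observations replace the smFRET photon-counting likelihood.

```python
import math
import random

# Minimal sequential particle filter: propagate replicas, weight them by the
# likelihood of each observation, and resample so replicas consistent with
# the data survive.  Dynamics, likelihood, and parameters are hypothetical.
def particle_filter(observations, n=500, sigma=0.3, rng=random.Random(1)):
    particles = [rng.uniform(0.0, 4.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # propagate each replica with toy dynamics (drift toward 2.0 plus noise)
        particles = [x + 0.5 * (2.0 - x) + rng.gauss(0.0, 0.1) for x in particles]
        # weight each replica by a Gaussian likelihood of the observation
        weights = [math.exp(-0.5 * ((y - x) / sigma) ** 2) for x in particles]
        # resample proportionally to the weights
        particles = rng.choices(particles, weights=weights, k=n)
        estimates.append(sum(particles) / n)
    return estimates

est = particle_filter([2.1, 1.9, 2.0, 2.05])
print(all(abs(e - 2.0) < 0.3 for e in est))  # True
```

In the paper's setting each "particle" is an entire replicated MD trajectory and the weights come from the smFRET photon-count likelihood, but the screening logic is the same.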

  6. Liquid scintillation counting methodology for 99Tc analysis. A remedy for radiopharmaceutical waste

    SciTech Connect (OSTI)

    Khan, Mumtaz; Um, Wooyong

    2015-08-13

    This paper presents a new approach for liquid scintillation counting (LSC) analysis of single-radionuclide samples containing appreciable organic or inorganic quench. This work offers better analytical results than existing LSC methods for technetium-99 (99gTc) analysis with significant savings in analysis cost and time. The method was developed to quantify 99gTc in environmental liquid and urine samples using LSC. Method efficiency was measured in the presence of 1.9 to 11,900 ppm total dissolved solids. The quench curve was proved to be effective in the case of spiked 99gTc activity calculation for deionized water, tap water, groundwater, seawater, and urine samples. Counting efficiency was found to be 91.66% for Ultima Gold LLT (ULG-LLT) and Ultima Gold (ULG). Relative error in spiked 99gTc samples was ±3.98% in ULG and ULG-LLT cocktails. Minimum detectable activity was determined to be 25.3 mBq and 22.7 mBq for ULG-LLT and ULG cocktails, respectively. A pre-concentration factor of 1000 was achieved at 100°C for 100% chemical recovery.
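
Minimum detectable activity figures like those quoted are conventionally computed with a Currie-style formula. The sketch below applies that standard formula with invented background and counting-time inputs; it is not the paper's actual calculation.

```python
# Currie-style minimum detectable activity (MDA), the standard route to
# figures like those quoted above.  Background counts, counting time, and
# the volume factor below are hypothetical.
def mda_bq(background_counts, count_time_s, efficiency, volume_factor=1.0):
    """MDA = (2.71 + 4.65*sqrt(B)) / (eff * t * V), in Bq."""
    detection_limit = 2.71 + 4.65 * background_counts ** 0.5
    return detection_limit / (efficiency * count_time_s * volume_factor)

# e.g. 400 background counts in 3600 s at the 91.66% counting efficiency
print(round(mda_bq(400, 3600, 0.9166), 4))  # 0.029 Bq, i.e. 29 mBq
```

Lowering the background or lengthening the count time is what pushes MDA down toward the ~23-25 mBq values the authors report.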

  7. Fact #850: December 8, 2014 Automatic Transmissions have closed the Fuel Economy Gap with Manual Transmissions- Dataset

    Broader source: Energy.gov [DOE]

    Excel file with dataset for Fact #850: December 8, 2014 Automatic Transmissions have closed the Fuel Economy Gap with Manual Transmissions

  8. An automatic contact algorithm in DYNA3D for impact problems

    SciTech Connect (OSTI)

    Whirley, R.G.; Engelmann, B.E.

    1993-07-23

    This paper presents a new approach for the automatic definition and treatment of mechanical contact in explicit nonlinear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational cost. Key aspects of the proposed new method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a well-defined surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.

  9. Association Between White Blood Cell Count Following Radiation Therapy With Radiation Pneumonitis in Non-Small Cell Lung Cancer

    SciTech Connect (OSTI)

    Tang, Chad; Gomez, Daniel R. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Wang, Hongmei [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou (China); Levy, Lawrence B. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhuang, Yan [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Xu, Ting; Nguyen, Quynh; Komaki, Ritsuko [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Liao, Zhongxing, E-mail: zliao@mdanderson.org [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2014-02-01

    Purpose: Radiation pneumonitis (RP) is an inflammatory response to radiation therapy (RT). We assessed the association between RP and white blood cell (WBC) count, an established metric of systemic inflammation, after RT for non-small cell lung cancer. Methods and Materials: We retrospectively analyzed 366 patients with non-small cell lung cancer who received ≥60 Gy as definitive therapy. The primary endpoint was whether WBC count after RT (defined as 2 weeks through 3 months after RT completion) was associated with grade ≥3 or grade ≥2 RP. Median lung volume receiving ≥20 Gy (V{sub 20}) was 31%, and post-RT WBC counts ranged from 1.7 to 21.2 x 10{sup 3} WBCs/μL. Odds ratios (ORs) associating clinical variables and post-RT WBC counts with RP were calculated via logistic regression. A recursive-partitioning algorithm was used to define optimal post-RT WBC count cut points. Results: Post-RT WBC counts were significantly higher in patients with grade ≥3 RP than in those without (P<.05). Optimal cut points for post-RT WBC count were found to be 7.4 and 8.0 x 10{sup 3}/μL for grade ≥3 and ≥2 RP, respectively. Univariate analysis revealed significant associations between post-RT WBC count and grade ≥3 (n=46, OR=2.6, 95% confidence interval [CI] 1.4-4.9, P=.003) and grade ≥2 RP (n=164, OR=2.0, 95% CI 1.2-3.4, P=.01). This association held in a stepwise multivariate regression. Of note, V{sub 20} was found to be significantly associated with grade ≥2 RP (OR=2.2, 95% CI 1.2-3.4, P=.01) and trended toward significance for grade ≥3 RP (OR=1.9, 95% CI 1.0-3.5, P=.06). Conclusions: Post-RT WBC counts were significantly and independently associated with RP and have potential utility as a diagnostic or predictive marker for this toxicity.
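    The odds ratios above come from logistic regression; for a single predictor dichotomized at a cut point (such as post-RT WBC count), the unadjusted OR reduces to the cross-product of the resulting 2x2 table, with a Woolf-style interval on the log scale. A minimal sketch with hypothetical counts, not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio from a 2x2 table
    (a = cases above the cut point, b = controls above,
     c = cases below, d = controls below).
    Equals exp(beta) from a univariate logistic regression
    on the binary predictor."""
    return (a * d) / (b * c)

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Woolf 95% confidence interval on the log odds ratio."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)
```

The multivariate, stepwise adjustment reported in the paper requires a full regression fit; the cross-product form only reproduces the univariate case.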

  10. Analysis of 161Tb by radiochemical separation and liquid scintillation counting

    SciTech Connect (OSTI)

    Jiang, J.; Davies, A.; Arrigo, L.; Friese, J.; Seiner, B. N.; Greenwood, L.; Finch, Z.

    2015-12-05

    The determination of 161Tb activity is problematic due to its very low fission yield, short half-life, and the complexity of its gamma spectrum. At AWE, radiochemically purified 161Tb solution was measured on a PerkinElmer 1220 Quantulus™ Liquid Scintillation Spectrometer. Since no certified 161Tb standard solution was available commercially, the counting efficiency was determined by the CIEMAT/NIST Efficiency Tracing method. The method was validated during a recent inter-laboratory comparison exercise involving the analysis of a uranium sample irradiated with thermal neutrons. The measured 161Tb result was in excellent agreement with the result from gamma spectrometry and the result obtained by Pacific Northwest National Laboratory.

  11. Analysis of 161Tb by radiochemical separation and liquid scintillation counting

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jiang, J.; Davies, A.; Arrigo, L.; Friese, J.; Seiner, B. N.; Greenwood, L.; Finch, Z.

    2015-12-05

    The determination of 161Tb activity is problematic due to its very low fission yield, short half-life, and the complexity of its gamma spectrum. At AWE, radiochemically purified 161Tb solution was measured on a PerkinElmer 1220 Quantulus™ Liquid Scintillation Spectrometer. Since no certified 161Tb standard solution was available commercially, the counting efficiency was determined by the CIEMAT/NIST Efficiency Tracing method. The method was validated during a recent inter-laboratory comparison exercise involving the analysis of a uranium sample irradiated with thermal neutrons. The measured 161Tb result was in excellent agreement with the result from gamma spectrometry and the result obtained by Pacific Northwest National Laboratory.

  12. Full counting statistics of energy fluctuations in a driven quantum resonator

    SciTech Connect (OSTI)

    Clerk, A. A.

    2011-10-15

    We consider the statistics of time-integrated energy fluctuations of a driven bosonic single-mode resonator, as measured by a quantum nondemolition (QND) detector, using the standard Keldysh prescription to define higher moments. We find that, due to an effective cascading of fluctuations, these statistics are surprisingly nonclassical: the low-temperature, quantum probability distribution is not equivalent to the high-temperature classical distribution evaluated at some effective temperature. Moreover, for a sufficiently large drive detuning and low temperatures, the Keldysh-ordered quasiprobability distribution characterizing these fluctuations fails to be positive-definite; this is similar to the full counting statistics of charge in superconducting systems. We argue that this indicates a kind of nonclassical behavior akin to that tested by Leggett-Garg inequalities.

  13. On the single-photon-counting (SPC) modes of imaging using an XFEL source

    SciTech Connect (OSTI)

    Wang, Zhehui

    2015-12-14

    In this study, the requirements to achieve high detection efficiency (above 50%) and gigahertz (GHz) frame rate for the proposed 42-keV X-ray free-electron laser (XFEL) at Los Alamos are summarized. Direct detection scenarios using C (diamond), Si, Ge and GaAs semiconductor sensors are analyzed. Single-photon counting (SPC) mode and weak SPC mode using Si can potentially meet the efficiency and frame rate requirements and be useful to both photoelectric absorption and Compton physics as the photon energy increases. Multilayer three-dimensional (3D) detector architecture, as a possible means to realize SPC modes, is compared with the widely used two-dimensional (2D) hybrid planar electrode structure and 3D deeply entrenched electrode architecture. Demonstration of thin film cameras less than 100-μm thick with onboard thin ASICs could be an initial step to realize multilayer 3D detectors and SPC modes for XFELs.

  14. Measurement of uranium and plutonium in solid waste by passive photon or neutron counting and isotopic neutron source interrogation

    SciTech Connect (OSTI)

    Crane, T.W.

    1980-03-01

    A summary of the status and applicability of nondestructive assay (NDA) techniques for the measurement of uranium and plutonium in 55-gal barrels of solid waste is reported. The NDA techniques reviewed include passive gamma-ray and x-ray counting with scintillator, solid state, and proportional gas photon detectors, passive neutron counting, and active neutron interrogation with neutron and gamma-ray counting. The active neutron interrogation methods are limited to those employing isotopic neutron sources. Three generic neutron sources (alpha-n, photoneutron, and /sup 252/Cf) are considered. The neutron detectors reviewed for both prompt and delayed fission neutron detection with the above sources include thermal (/sup 3/He, /sup 10/BF/sub 3/) and recoil (/sup 4/He, CH/sub 4/) proportional gas detectors and liquid and plastic scintillator detectors. The instrument found to be best suited for low-level measurements (< 10 nCi/g) is the /sup 252/Cf Shuffler. The measurement technique consists of passive neutron counting followed by cyclic activation using a /sup 252/Cf source and delayed neutron counting with the source withdrawn. It is recommended that a waste assay station composed of a /sup 252/Cf Shuffler, a gamma-ray scanner, and a screening station be tested and evaluated at a nuclear waste site. 34 figures, 15 tables.

  15. Task-based weights for photon counting spectral x-ray imaging

    SciTech Connect (OSTI)

    Bornefalk, Hans

    2011-11-15

    Purpose: To develop a framework for taking the spatial frequency composition of an imaging task into account when determining optimal bin weight factors for photon counting energy sensitive x-ray systems. A second purpose of the investigation is to evaluate the possible improvement compared to using pixel-based weights. Methods: The Fourier-based approach to imaging performance and the detectability index d' is applied to pulse height discriminating photon counting systems. The dependency of d' on the bin weight factors is made explicit, taking into account both differences in signal and noise transfer characteristics across bins and the spatial frequency dependency of interbin correlations from reabsorbed scatter. Using a simplified model of a specific silicon detector, d' values for a high and a low frequency imaging task are determined for optimal weights and compared to pixel-based weights. Results: The method successfully identifies bins where a large point spread function degrades detection of high spatial frequency targets. The method is also successful in determining how to down-weight highly correlated bins. Quantitative predictions for the simplified silicon detector model indicate that improvements in the detectability index when applying task-based weights instead of pixel-based weights are small for high frequency targets, but could be in excess of 10% for low frequency tasks where scatter-induced correlation otherwise degrades detectability. Conclusions: The proposed method makes the spatial frequency dependency of complex correlation structures between bins, and their effect on the system detective quantum efficiency, easier to analyze, and allows optimizing bin weights for given imaging tasks. A potential increase in detectability of double-digit percents in silicon detector systems operated at typical CT energies (100 kVp) merits further evaluation on a real system. The method is noted to be of higher relevance for silicon detectors than for cadmium (zinc
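    The detectability index d' referred to above follows the standard Fourier description of imaging performance. A minimal numerical sketch, with illustrative transfer functions rather than the paper's silicon detector model:

```python
import math

def detectability_index(task_weights, mtf, nps, df):
    """Fourier-domain detectability index:

        d'^2 = sum_f [W(f) * MTF(f)]^2 / NPS(f) * df

    task_weights (W), mtf, and nps are sampled on a common
    spatial-frequency grid with spacing df. A low-frequency task
    concentrates W at small f; a high-frequency task at large f."""
    d_squared = sum((w * m) ** 2 / n
                    for w, m, n in zip(task_weights, mtf, nps)) * df
    return math.sqrt(d_squared)
```

Evaluating d' separately per energy bin (each with its own MTF and NPS) is what lets the method down-weight bins whose point spread function or interbin correlation hurts a given task.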

  16. EZ reader: Embedded AI for automatic electronic mail interpretation and routing

    SciTech Connect (OSTI)

    Rice, A.; Hus, J.; Angotti, A.; Piccolo, R.

    1996-12-31

    EZ Reader is an intelligent electronic mail (email) reader that employs a unique combination of rule-based parsing and case-based reasoning to classify and respond to large volumes of incoming email automatically and with a high level of accuracy. EZ Reader reduces the time and human resources required to handle incoming email by selecting responses and adding attachments and advice to each incoming message based on how previous similar messages were handled. The application, developed for Chase Manhattan Bank using Brightware, Inc.'s ART* Enterprise{reg_sign} tool, answers emails automatically and decreases processing time for those requiring manual review. Phase I of EZ Reader was deployed in the first quarter of 1996 and handles up to 80% of incoming mail automatically, depending on message content. Later phases will enable automatic processing of a wider variety of messages. By dramatically reducing the effort associated with manual processing, EZ Reader will pay its own development costs within six months and will result in substantial, recurring dollar savings each year. This paper describes EZ Reader in detail, including its AI-based design, testing, implementation, and development history.
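    The two-stage rule-then-case classification described above can be sketched as keyword rules with a nearest-case fallback; the rules, queues, and word-overlap similarity here are hypothetical stand-ins, not EZ Reader's actual logic:

```python
def route_email(text, rules, cases):
    """Two-stage routing sketch: keyword rules fire first; otherwise
    fall back to the most similar previously handled message.

    rules: {keyword: queue} -- rule-based stage (hypothetical)
    cases: [(previous_text, queue), ...] -- case base (hypothetical)
    """
    words = set(text.lower().split())
    # Stage 1: rule-based parsing (here reduced to keyword matching)
    for keyword, queue in rules.items():
        if keyword in words:
            return queue
    # Stage 2: case-based reasoning, reduced to crude word overlap
    best = max(cases, key=lambda c: len(words & set(c[0].lower().split())))
    return best[1]
```

A production system would use far richer parsing and similarity measures; the point of the sketch is only the rules-first, cases-as-fallback control flow.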

  17. SU-E-T-362: Automatic Catheter Reconstruction of Flap Applicators in HDR Surface Brachytherapy

    SciTech Connect (OSTI)

    Buzurovic, I; Devlin, P; Hansen, J; O'Farrell, D; Bhagwat, M; Friesen, S; Damato, A; Lewis, J; Cormack, R

    2014-06-01

    Purpose: Catheter reconstruction is crucial for the accurate delivery of radiation dose in HDR brachytherapy. The process becomes complicated and time-consuming for large superficial clinical targets with a complex topology. A novel method for the automatic catheter reconstruction of flap applicators is proposed in this study. Methods: We have developed a program package capable of image manipulation, using the C++ class libraries of the Visualization Toolkit (VTK) software system. The workflow for automatic catheter reconstruction is: (a) an anchor point is placed in 3D, or in the axial view of the first slice, at the tip of the first, last, and middle points of the curved surface; (b) similar points are placed on the last slice of the image set; (c) the surface detection algorithm automatically registers the points to the images and applies the surface reconstruction filter; (d) a structured grid surface is then generated through the center of the treatment catheters, placed at a distance of 5 mm from the patient's skin. As a result, a mesh-style plane is generated with the reconstructed catheters placed 10 mm apart. To demonstrate automatic catheter reconstruction, we used CT images of patients diagnosed with cutaneous T-cell lymphoma and imaged with Freiburg Flap Applicators (Nucletron-Elekta, Netherlands). The coordinates for each catheter were generated and compared to the control points selected during manual reconstruction for 16 catheters and 368 control points. Results: The variation of the catheter tip positions between the automatically and manually reconstructed catheters was 0.17 mm (SD=0.23 mm). The position difference between the manually selected catheter control points and the corresponding points obtained automatically was 0.17 mm in the x-direction (SD=0.23 mm), 0.13 mm in the y-direction (SD=0.22 mm), and 0.14 mm in the z-direction (SD=0.24 mm).
    Conclusion: This study shows the feasibility of the automatic catheter reconstruction of flap applicators with a high level of
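    The agreement figures reported are per-axis means and standard deviations of the differences between automatically and manually selected control points; that comparison can be sketched as follows (the point lists are hypothetical):

```python
import math

def axis_error_stats(auto_pts, manual_pts):
    """Per-axis (x, y, z) mean and population SD of the absolute
    differences between automatically and manually reconstructed
    control points. Points are (x, y, z) tuples in mm."""
    stats = []
    for axis in range(3):
        d = [abs(a[axis] - m[axis]) for a, m in zip(auto_pts, manual_pts)]
        mean = sum(d) / len(d)
        sd = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
        stats.append((mean, sd))
    return stats
```

Run over all 368 control-point pairs, this yields exactly the kind of per-axis mean/SD summary quoted in the Results.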

  18. Hydrogen cluster/network in tobermorite as studied by multiple-quantum spin counting {sup 1}H NMR

    SciTech Connect (OSTI)

    Mogami, Yuuki; Yamazaki, Satoru; Matsuno, Shinya; Matsui, Kunio; Noda, Yasuto; Takegoshi, K.

    2014-12-15

    A proton multiple-quantum (MQ) spin-counting experiment has been employed to study the arrangement of hydrogen atoms in 9 Å/11 Å natural and synthetic tobermorites. Even though all tobermorite samples give similar characterless, broad static-powder {sup 1}H NMR spectra, their MQ spin-counting spectra are markedly different; higher quanta in 11 Å tobermorite do not grow with the MQ excitation time, while those in the 9 Å form do. A statistical analysis of the MQ results recently proposed [26] is applied to show that hydrogens in 9 Å tobermorite align one-dimensionally, while in 11 Å tobermorite they exist as clusters of 5-8 hydrogen atoms.

  19. 500-MHz x-ray counting with a Si-APD and a fast-pulse processing system

    SciTech Connect (OSTI)

    Kishimoto, Shunji; Taniguchi, Takashi; Tanaka, Manobu

    2010-06-23

    We introduce a counting system capable of up to 500 MHz for synchrotron x-ray high-rate measurements. A silicon avalanche photodiode detector was used in the counting system. The fast-pulse circuit of the amplifier was designed with hybrid ICs, in preparation for an ASIC system for a large-scale pixel array detector in the near future. The fast amplifier consists of two cascading emitter-followers using 10-GHz band transistors. A count rate of 3.25x10{sup 8} s{sup -1} was achieved using the system for 8-keV x-rays. However, a baseline shift caused by the AC coupling in the amplifier prevented us from observing the maximum count rate of 4.49x10{sup 8} s{sup -1}, which is determined by the electron-bunch filling of the ring accelerator. We also report on a test of an amplifier with a baseline restorer, intended to hold the baseline at 0 V even at high input rates.

  20. Degree of polarization and source counts of faint radio sources from Stacking Polarized intensity

    SciTech Connect (OSTI)

    Stil, J. M.; George, S. J.; Keller, B. W.; Taylor, A. R.

    2014-06-01

    We present stacking of polarized intensity as a means to study the polarization of sources that are too faint to be detected individually in surveys of polarized radio sources. Stacking not only offers high sensitivity to the median signal of a class of radio sources, but also avoids a detection threshold in polarized intensity, and therefore an arbitrary exclusion of sources with a low percentage of polarization. Correction for polarization bias is done through a Monte Carlo analysis and tested on a simulated survey. We show that the nonlinear relation between the real polarized signal and the detected signal requires knowledge of the shape of the distribution of fractional polarization, which we constrain using the ratio of the upper quartile to the lower quartile of the distribution of stacked polarized intensities. Stacking polarized intensity for NRAO VLA Sky Survey (NVSS) sources down to the detection limit in Stokes I, we find a gradual increase in median fractional polarization that is consistent with a trend noticed before for bright NVSS sources, but is much more gradual than found by previous deep surveys of radio polarization. Consequently, the polarized radio source counts derived from our stacking experiment predict fewer polarized radio sources for future surveys with the Square Kilometre Array and its pathfinders.
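    Polarized intensity P = sqrt(Q^2 + U^2) is positively biased by noise, which is why stacking without correction would overestimate fractional polarization. The paper handles this with a Monte Carlo analysis; the classic first-order correction below is shown only to illustrate the effect:

```python
import math

def debias_polarized_intensity(p_obs: float, sigma: float) -> float:
    """First-order correction for polarization bias.

    P_obs = sqrt(Q**2 + U**2) is positively biased because noise in
    Q and U always adds in quadrature; the textbook first-order
    estimate of the true polarized intensity is
    sqrt(P_obs**2 - sigma**2), clipped at zero. The paper itself uses
    a full Monte Carlo treatment; this is only the standard
    approximation, shown for orientation.
    """
    return math.sqrt(max(p_obs ** 2 - sigma ** 2, 0.0))
```

Near the noise floor (P_obs comparable to sigma) this approximation breaks down, which is precisely the regime where the paper's Monte Carlo approach and the quartile-ratio constraint on the fractional-polarization distribution are needed.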

  1. Risk communication with Fukushima residents affected by the Fukushima Daiichi accident at whole-body counting

    SciTech Connect (OSTI)

    Gunji, I.; Furuno, A.; Yonezawa, R.; Sugiyama, K.

    2013-07-01

    After the Tokyo Electric Power Company (TEPCO) Fukushima Daiichi nuclear power plant accident, the Tokai Research and Development Center of the Japan Atomic Energy Agency (JAEA) has held direct dialogues, as risk communication, with Fukushima residents who underwent whole-body counting (WBC) examination. The purpose of the risk communication was to exchange information and opinions about radiation in order to mitigate the residents' anxiety and stress. Two opinion surveys were performed: one evaluated residents' views of the nuclear accident itself, and the second evaluated the management of the WBC examination as well as the quality of JAEA's risk communication skills. Most Fukushima residents seem to have reduced their anxiety level after the direct dialogue. The survey results show that Fukushima residents have the deepest anxiety and concern about their long-term health and that they harbor anger toward the government and TEPCO. On the other hand, many WBC examinees and their relatives have expressed gratitude for help in reducing their feelings of anxiety.

  2. On the single-photon-counting (SPC) modes of imaging using an XFEL source

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wang, Zhehui

    2015-12-14

    In this study, the requirements to achieve high detection efficiency (above 50%) and gigahertz (GHz) frame rate for the proposed 42-keV X-ray free-electron laser (XFEL) at Los Alamos are summarized. Direct detection scenarios using C (diamond), Si, Ge and GaAs semiconductor sensors are analyzed. Single-photon counting (SPC) mode and weak SPC mode using Si can potentially meet the efficiency and frame rate requirements and be useful to both photoelectric absorption and Compton physics as the photon energy increases. Multilayer three-dimensional (3D) detector architecture, as a possible means to realize SPC modes, is compared with the widely used two-dimensional (2D) hybrid planar electrode structure and 3D deeply entrenched electrode architecture. Demonstration of thin film cameras less than 100-μm thick with onboard thin ASICs could be an initial step to realize multilayer 3D detectors and SPC modes for XFELs.

  3. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    SciTech Connect (OSTI)

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea; Koehler, Katrina Elizabeth; Henzl, Vladimir; Henzlova, Daniela; Parker, Robert Francis; Croft, Stephen

    2015-12-01

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
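    The higher-order moments mentioned (Quads and Pents) are reduced factorial moments of the measured multiplicity histogram; a minimal sketch that deliberately omits the dead-time and gate-fraction corrections the report is concerned with:

```python
from math import comb

def factorial_moments(histogram, max_order=5):
    """Reduced factorial moments of a neutron multiplicity histogram.

    histogram[n] is the number of counting gates in which exactly n
    neutrons were detected. The r-th reduced factorial moment is
    m_r = sum_n C(n, r) * P(n); r = 1..5 gives the singles, doubles,
    triples, quads, and pents used in multiplicity analysis.
    Dead-time and gate-fraction corrections are omitted here."""
    total = sum(histogram)
    return [sum(comb(n, r) * c for n, c in enumerate(histogram)) / total
            for r in range(1, max_order + 1)]
```

Because the higher moments are built from the rare high-multiplicity tail of the histogram, their statistical (and dead-time) errors grow quickly with order, which is why self-consistent correction algorithms are the crux of extending the point model.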

  4. Automatic assembly planning and its role in agile manufacturing: A Sandia perspective

    SciTech Connect (OSTI)

    Jones, R.E.; Kaufman, S.G.

    1993-11-01

    Sandia has been studying automatic assembly planning of electromechanical devices for some years, based on an implemented system called Archimedes. Work done to date has focused on the automatic generation of high-level plans and the translation of these plans into robotic control code and workcell layout. More recently, the importance of an assembly planning capability as a design aid has been emphasized, as it could potentially provide early feedback to a designer on the manufacturability of the design. This paper describes the work done on assembly planning to date, plans for extending it, and its applications to agile manufacturing. In particular, we describe an agile manufacturing demonstration project underway at Sandia and the role the Archimedes assembly planning system will play in it.

  5. Combining automatic titration of total iron and sulfur in thermal battery materials

    SciTech Connect (OSTI)

    Marley, N.A.

    1986-05-28

    Optimal thermal battery performance requires careful control of the iron disulfide content in the catholyte mixture. Previously, the iron and sulfur content of battery materials was determined separately, each determination requiring a lengthy sample preparation and cleanup procedure. A new method has been developed which allows both determinations to be made on the same sample following a simple dissolution procedure. Sample preparation requires oxidation and dissolution with nitric acid, followed by dissolution in hydrochloric acid. Iron and sulfur are then determined on sample aliquots by automatic titration. The implementation of this combined procedure has resulted in a substantial reduction in analysis time. Since sample aliquots are used for each determination, the need to repeat a sample analysis is rare, improving both analytical efficiency and sample throughput. Results obtained for sulfur show improved precision.

  6. Nanofiber Image Processing Tool for Automatic Measurements and Non-Woven Fiber Recognition Software

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Technology available for licensing: Argonne-developed fiber recognition and analysis software uses a machine vision approach and looks at the fibers as a human would when doing its analysis. The software was developed specifically to analyze images with challenging but common and realistic issues with high reliability and consistency.

  7. Development of automatic operation system for coke oven machines at Yawata Works of Nippon Steel Corporation

    SciTech Connect (OSTI)

    Matsunaga, Masao; Uematsu, Hiroshi; Nakagawa, Yoji; Ishiharaguchi, Yuji

    1995-12-01

    The coke plant is a working environment involving heavy dust emissions, high heat, and demanding physical labor. Labor-saving operation of the coke plant is an essential issue, not only for improving the working environment but also for reducing fixed costs by enhancing labor productivity. Under these circumstances, Nippon Steel has implemented the automation of coke oven machines. The first automatic operation system for coke oven machinery entered service at Oita Works in 1992, followed by the second system at the No. 5 coke oven battery of the coke plant at Yawata Works. The Yawata automatic operation system is characterized by coke oven machinery able to push as many as 140 ovens per day on a short cycle time, by features such as a preliminary ascension-pipe cap-opening car and a cycle-time simulator, by the manned operation of the pusher, which is advantageous from the standpoint of investment efficiency, and by the monitoring of the other oven machines from the pusher. These measures helped to reduce the manpower requirement from 4 persons per shift to 2. The system entered commercial operation in March 1994 and has been working smoothly with an average automation rate of 97%. Results from startup through recent operation of the system are reported.

  8. SEDS: THE SPITZER EXTENDED DEEP SURVEY. SURVEY DESIGN, PHOTOMETRY, AND DEEP IRAC SOURCE COUNTS

    SciTech Connect (OSTI)

    Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Huang, J.-S.; Hernquist, L.; Hora, J. L.; Arendt, R.; Barmby, P.; Barro, G.; Faber, S.; Guhathakurta, P.; Bouwens, R.; Cattaneo, A.; Croton, D.; Dave, R.; Dunlop, J. S.; Egami, E.; Finlator, K.; Grogin, N. A.; and others

    2013-05-20

    The Spitzer Extended Deep Survey (SEDS) is a very deep infrared survey within five well-known extragalactic science fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the Hubble Deep Field North, and the Extended Groth Strip. SEDS covers a total area of 1.46 deg{sup 2} to a depth of 26 AB mag (3{sigma}) in both of the warm Infrared Array Camera (IRAC) bands at 3.6 and 4.5 {mu}m. Because of its uniform depth of coverage in so many widely-separated fields, SEDS is subject to roughly 25% smaller errors due to cosmic variance than a single-field survey of the same size. SEDS was designed to detect and characterize galaxies from intermediate to high redshifts (z = 2-7) with a built-in means of assessing the impact of cosmic variance on the individual fields. Because the full SEDS depth was accumulated in at least three separate visits to each field, typically with six-month intervals between visits, SEDS also furnishes an opportunity to assess the infrared variability of faint objects. This paper describes the SEDS survey design, processing, and publicly-available data products. Deep IRAC counts for the more than 300,000 galaxies detected by SEDS are consistent with models based on known galaxy populations. Discrete IRAC sources contribute 5.6 {+-} 1.0 and 4.4 {+-} 0.8 nW m{sup -2} sr{sup -1} at 3.6 and 4.5 {mu}m to the diffuse cosmic infrared background (CIB). IRAC sources cannot contribute more than half of the total CIB flux estimated from DIRBE data. Barring an unexpected error in the DIRBE flux estimates, half the CIB flux must therefore come from a diffuse component.

  9. Recent Developments In Fast Neutron Detection And Multiplicity Counting With Verification With Liquid Scintillator

    SciTech Connect (OSTI)

    Nakae, L; Chapline, G; Glenn, A; Kerr, P; Kim, K; Ouedraogo, S; Prasad, M; Sheets, S; Snyderman, N; Verbeke, J; Wurtz, R

    2011-09-30

    For many years at LLNL, we have been developing time-correlated neutron detection techniques and algorithms for applications such as arms control, threat detection, and nuclear material assay. Many of our techniques have been developed specifically for the relatively low efficiency (a few percent) attainable by detector systems limited to man-portability. Historically, we used thermal neutron detectors (mainly {sup 3}He), taking advantage of the high thermal neutron interaction cross-sections. More recently, we have been investigating the use of fast neutron detection with liquid scintillators, inorganic crystals, and, in the near future, pulse-shape-discriminating plastics, which respond over 1000 times faster (nanoseconds versus tens of microseconds) than thermal neutron detectors. Fast neutron detection offers considerable advantages, since the inherent nanosecond production time-scales of spontaneous fission and neutron-induced fission are preserved and measured instead of being lost to the thermalization required by thermal neutron detectors. We are now applying fast neutron technology to the safeguards regime in the form of fast portable digital electronics as well as faster and less hazardous scintillator formulations. Faster detector response times and sensitivity to neutron momentum show promise for measuring, differentiating, and assaying samples that have modest to very high count rates, as well as mixed fission sources like Cm and Pu. We report on measured results with our existing liquid scintillator array, and on progress in the design of a nuclear material assay system that incorporates fast neutron detection, including the surprising result that fast liquid scintillator detectors become competitive with, and can even surpass, the precision of {sup 3}He-based counters when measuring correlated pairs in modest (kg) samples of plutonium.

  10. Vehicle Technologies Office 2013 Merit Review: A System for Automatically Maintaining Pressure in a Commercial Truck Tire

    Broader source: Energy.gov [DOE]

    A presentation given by PPG during the 2013 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting on a system for automatically maintaining tire pressure in commercial truck tires.

  11. General collaboration offer of Johnson Controls regarding the performance of air conditioning automatic control systems and other buildings` automatic control systems

    SciTech Connect (OSTI)

    Gniazdowski, J.

    1995-12-31

    JOHNSON CONTROLS manufactures measuring and control equipment (800 types) and is also a {open_quotes}turn-key{close_quotes} supplier of complete automatic control systems for the heating, air conditioning, ventilation, and refrigeration engineering branches. The Company also supplies buildings' computer-based supervision and monitoring systems that may be applied in both small and large structures. Since 1990 the company has been performing full-range trade and contracting activities on the Polish market. We have our own well-trained technical staff, and we collaborate with a number of design and contracting enterprises that enable us to carry out our projects all over Poland. The prices of our supplies and services correspond to the level of the Polish market.

  12. Market Assessment of Refinery Outages Planned for October 2010...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    average values for 2002-2009 excluding months in 2005, 2006, and 2008 affected by hurricanes & refinery closures. Similarly, typical historical values are average planned...

  13. May 2016 Planned Outages Archive | The Ames Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    May 2016 National Idling Reduction Network News May 2016 National Idling Reduction Network News July 13, 2016 - 10:07am Addthis The National Idling Reduction Network brings together trucking and transit companies; railroads; ports; equipment manufacturers; Federal, state, and local government agencies (including regulators); nonprofit organizations; and national research laboratories to identify consistent, workable solutions to heavy-vehicle idling for the entire United States. Below is the May

  14. Market Assessment of Refinery Outages Planned for March 2011...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    fluctuating between $70 and $85 per barrel, but by the beginning of 2011, Brent crude oil was at $95 per barrel. Recent instability in the Middle East and North Africa added...

  15. HPSS Outage Tue Mar 19 - Fri Mar 22

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of the HPSS client software tools hsi and htar. Note that if you use a "regular" ftp client, your current client will work with the new version of the HPSS server software....

  16. Market Assessment of Refinery Outages Planned for March 2011...

    Gasoline and Diesel Fuel Update (EIA)

    Consumption 20.68 19.50 18.77 19.13 19.28 0.8% Note: Gasoline consumption includes ethanol. Source: 2007-2010, EIA Petroleum Supply Monthly; 2011, February 2011 Short Term...

  17. Power Outages Update: Post-Tropical Cyclone Sandy

    Office of Energy Efficiency and Renewable Energy (EERE)

    Hurricane Sandy has landed and the Energy Department is working closely to support state and local officials who are responsible for working with utilities.

  18. Survey of Tools for Risk Assessment of Cascading Outages

    SciTech Connect (OSTI)

    Papic, Milorad; Bell, Keith; Chen, Yousu; Dobson, Ian; Fonte, Louis; Haq, Enamul; Hines, Paul; Kirschen, Daniel; Luo, Xiaochuan; Miller, Stephen; Samaan, Nader A.; Vaiman, Marianna; Varghese, Matthew; Zhang, Pei

    2011-10-17

    Cascading failure can cause large blackouts, and a variety of methods are emerging to study this challenging topic. In parts 1 and 2 of this paper, the IEEE task force on cascading failure seeks to consolidate and review the progress of the field towards methods and tools of assessing the risk of cascading failure. Part 2 summarizes and discusses the state of the art in the available cascading failure modeling tools. The discussion integrates industry and research perspectives from a variety of institutions. Strengths, weaknesses, and gaps in current approaches are indicated.

  19. Market Assessment of Refinery Outages Planned for October 2010...

    Gasoline and Diesel Fuel Update (EIA)

    2011 November 2010 Energy Information Administration Office of Petroleum, Gas, and Biofuels Analysis U.S. Department of Energy Washington, DC 20585 This report was prepared by...

  20. Microsoft Word - 112706 Final Outage Letter PUBLIC.doc

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    CRITICAL ENERGY INFRASTRUCTURE INFORMATION REMOVED FOR PRIVILEGED TREATMENT November 27, 2006 Lawrence Mansueti Office of Electricity Delivery and Energy Reliability U.S. Department of Energy Rm. 8H-033 1000 Independence Avenue Washington, D.C. 20585 Re: Potomac River Generating Station Department of Energy Case No. EO-05-01 Dear Mr. Mansueti: Potomac Electric Power Company ("Pepco"), on behalf of itself and PJM Interconnection, L.L.C. ("PJM"), is providing you with

  1. Refinery Outages: Description and Potential Impact on Petroleum Product Prices

    Reports and Publications (EIA)

    2007-01-01

    This report responds to a July 13, 2006 request from Chairman Jeff Bingaman of the Senate Committee on Energy and Natural Resources that the Energy Information Administration conduct a study of the impact that refinery shutdowns have had on the price of oil and gasoline.

  2. Separation and counting of single molecules through nanofluidics, programmable electrophoresis, and nanoelectrode-gated tunneling and dielectric detection

    DOE Patents [OSTI]

    Lee, James W.; Thundat, Thomas G.

    2006-04-25

    An apparatus for carrying out the separation, detection, and/or counting of single molecules at nanometer scale. Molecular separation is achieved by driving single molecules through a microfluidic or nanofluidic medium using programmable and coordinated electric fields. In various embodiments, the fluidic medium is a strip of hydrophilic material on a nonconductive hydrophobic surface, a trough produced by parallel strips of hydrophobic nonconductive material on a hydrophilic base, or a covered passageway produced by parallel strips of hydrophobic nonconductive material on a hydrophilic base together with a nonconductive cover on the parallel strips of hydrophobic nonconductive material. The molecules are detected and counted using nanoelectrode-gated electron tunneling methods, dielectric monitoring, and other methods.

  3. Impact of sensitivity and throughput on optimum selection of a low-background alpha/beta gross counting system

    SciTech Connect (OSTI)

    Seymour, R.; Sergent, F.; Knight, K.; Kyker, B.

    1992-12-31

    Selection of the appropriate low-background counting system is determined by the laboratory's measurement requirements including the radionuclide activities being measured, required sensitivity, sample volume, sample throughput, operator skill, automation, reporting requirements, budget, reliability, service, and upgrade capability. These requirements are ranked differently by each user. Nevertheless, any selection requires that the sensitivity and sample throughput be evaluated first because these parameters are instrument-specific, cannot be changed after the equipment is purchased, and are easily quantified beforehand. Many of the other criteria are also related to sensitivity and affect the choice of instrument. Mathematical expressions, useful in evaluating sensitivity and throughput, are reviewed, extended, and applied to selecting a low-background alpha/beta counting system.
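
    The abstract refers to sensitivity expressions without reproducing them; the figure most such evaluations start from is Currie's critical level and detection limit. A minimal sketch under standard assumptions (paired blank, 5% false-positive and false-negative risks) — not necessarily the exact expressions the authors extend, and the function names are illustrative:

```python
import math

def currie_limits(b_counts: float) -> tuple[float, float]:
    """Currie's critical level (Lc) and detection limit (Ld), in counts,
    for a paired blank with b_counts background counts (k = 1.645)."""
    lc = 2.33 * math.sqrt(b_counts)
    ld = 2.71 + 4.65 * math.sqrt(b_counts)
    return lc, ld

def mda(b_counts: float, count_time_s: float, efficiency: float) -> float:
    """Minimum detectable activity (Bq) from Ld, the counting time,
    and the absolute counting efficiency."""
    _, ld = currie_limits(b_counts)
    return ld / (efficiency * count_time_s)
```

    Longer counting times lower the MDA but reduce sample throughput, which is exactly the sensitivity/throughput trade-off the abstract highlights.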

  4. SU-E-I-77: A Noise Reduction Technique for Energy-Resolved Photon-Counting Detectors

    SciTech Connect (OSTI)

    Lam Ng, A; Ding, H; Cho, H; Molloi, S

    2014-06-01

    Purpose: Finding the optimal energy threshold setting for an energy-resolved photon-counting detector has an important impact on the maximization of contrast-to-noise ratio (CNR). We introduce a noise reduction method to enhance CNR by reducing the noise in each energy bin without altering the average gray levels in the projection and image domains. Methods: We simulated a four-bin energy-resolved photon-counting detector based on Si with a 10 mm depth of interaction. The TASMIP algorithm was used to simulate a spectrum of 65 kVp with a 2.7 mm Al filter. A 13 mm PMMA phantom with hydroxyapatite and iodine at different concentrations (100, 200 and 300 mg/ml for HA, and 2, 4, and 8 mg/ml for iodine) was used. Projection-based and image-based energy weighting methods were used to generate weighted images. A reference low-noise image was used for noise reduction purposes. A Gaussian-like weighting function that computes the similarity between pixels of interest was calculated from the reference image and applied on a pixel-by-pixel basis to the noisy images. Results: CNR improvement compared to the different methods (charge-integrated, photon-counting, and energy-weighting) and after noise reduction was highly task-dependent. The CNR improvements with respect to the charge-integrated CNR for hydroxyapatite and iodine were 1.8 and 1.5, respectively. In each of the energy bins, the noise was reduced by approximately a factor of two without altering the respective average gray levels. Conclusion: The proposed noise reduction technique for energy-resolved photon-counting detectors can significantly reduce image noise. This technique can be used as a complement to current energy-weighting methods in CNR optimization.
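
    The exact weighting function is not given in the abstract; a minimal sketch of this kind of reference-guided smoothing, assuming a joint-bilateral-style Gaussian of reference-image intensity differences (all names and the parameterization are illustrative, not the authors' implementation):

```python
import math

def guided_smooth(noisy, reference, radius=1, h=10.0):
    """Each output pixel is a weighted mean over its neighbourhood in the
    noisy energy-bin image; weights are a Gaussian of intensity differences
    in a low-noise reference image, so edges present in the reference are
    preserved while per-bin noise is averaged down."""
    rows, cols = len(noisy), len(noisy[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            wsum = vsum = 0.0
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        d = reference[rr][cc] - reference[r][c]
                        w = math.exp(-(d * d) / (2.0 * h * h))
                        wsum += w
                        vsum += w * noisy[rr][cc]
            out[r][c] = vsum / wsum
    return out
```

    With a flat reference the filter degenerates to a plain local mean; across a strong reference edge the weights vanish, which is how average gray levels on each side are preserved.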

  5. MEASUREMENT OF RADIONUCLIDES USING ION CHROMATOGRAPHY AND FLOW-CELL SCINTILLATION COUNTING WITH PULSE SHAPE DISCRIMINATION

    SciTech Connect (OSTI)

    R. A. Fjeld; T.A. DeVol; J.D. Leyba

    2000-03-30

    Radiological characterization and monitoring is an important component of environmental management activities throughout the Department of Energy complex. Gamma-ray spectroscopy is the technology most often used for the detection of radionuclides. However, radionuclides which cannot easily be detected by gamma-ray spectroscopy, such as pure beta emitters and transuranics, pose special problems because their quantification generally requires labor intensive radiochemical separations procedures that are time consuming and impractical for field applications. This project focused on a technology for measuring transuranics and pure beta emitters relatively quickly and has the potential of being field deployable. The technology combines ion exchange liquid chromatography and on-line alpha/beta pulse shape discriminating scintillation counting to produce simultaneous alpha and beta chromatograms. The basic instrumentation upon which the project was based was purchased in the early 1990's. In its original commercial form, the instrumentation was capable of separating select activation/fission products in ionic forms from relatively pure aqueous samples. We subsequently developed the capability of separating and detecting actinides (thorium, uranium, neptunium, plutonium, americium, and curium) in less than 30 minutes (Reboul, 1993) and realized that the potential time savings over traditional radiochemical methods for isolating some of these radionuclides was significant. However, at that time, the technique had only been used for radionuclide concentrations that were considerably above environmental levels and for aqueous samples of relatively high chemical purity. For the technique to be useful in environmental applications, development work was needed in lowering detection limits; to be useful in applications involving non-aqueous matrices such as soils and sludges or complex aqueous matrices such as those encountered in waste samples, development work was needed in

  6. High quantum efficiency and low dark count rate in multi-layer superconducting nanowire single-photon detectors

    SciTech Connect (OSTI)

    Jafari Salim, A.; Eftekharian, A.; Hamed Majedi, A.

    2014-02-07

    In this paper, we theoretically show that a multi-layer superconducting nanowire single-photon detector (SNSPD) is capable of approaching characteristics of an ideal SNSPD in terms of the quantum efficiency, dark count, and band-width. A multi-layer structure improves the performance in two ways. First, the potential barrier for thermally activated vortex crossing, which is the major source of dark counts and the reduction of the critical current in SNSPDs is elevated. In a multi-layer SNSPD, a vortex is made of 2D-pancake vortices that form a stack. It will be shown that the stack of pancake vortices effectively experiences a larger potential barrier compared to a vortex in a single-layer SNSPD. This leads to an increase in the experimental critical current as well as significant decrease in the dark count rate. In consequence, an increase in the quantum efficiency for photons of the same energy or an increase in the sensitivity to photons of lower energy is achieved. Second, a multi-layer structure improves the efficiency of single-photon absorption by increasing the effective optical thickness without compromising the single-photon sensitivity.

  7. Comparison of MCNP6 and experimental results for neutron counts, Rossi-{alpha}, and Feynman-{alpha} distributions

    SciTech Connect (OSTI)

    Talamo, A.; Gohar, Y.; Sadovich, S.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2013-07-01

    MCNP6, the general-purpose Monte Carlo N-Particle code, has the capability to perform time-dependent calculations by tracking the time interval between successive events of the neutron random walk. In fixed-source calculations for a subcritical assembly, the zero time value is assigned at the moment the neutron is emitted by the external neutron source. The PTRAC and F8 cards of MCNP allow tallying the time when a neutron is captured by {sup 3}He(n, p) reactions in the neutron detector. From this information, it is possible to build three different time distributions: neutron counts, Rossi-{alpha}, and Feynman-{alpha}. The neutron counts time distribution represents the number of neutrons captured as a function of time. The Rossi-{alpha} distribution represents the number of neutron pairs captured as a function of the time interval between two capture events. The Feynman-{alpha} distribution represents the variance-to-mean ratio, minus one, of the neutron counts array as a function of a fixed time interval. The MCNP6 results for these three time distributions have been compared with the experimental data of the YALINA Thermal facility and have been found to be in quite good agreement. (authors)
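
    The Feynman-{alpha} tally described above can be sketched directly from a list of capture times (a simplified illustration, not MCNP6's implementation; gate alignment and dead-time effects are ignored):

```python
def feynman_y(capture_times, gate_width, t_max):
    """Feynman-Y: variance-to-mean ratio, minus one, of neutron counts
    tallied in consecutive non-overlapping gates of width gate_width."""
    n_gates = int(t_max / gate_width)
    counts = [0] * n_gates
    for t in capture_times:
        i = int(t / gate_width)
        if 0 <= i < n_gates:
            counts[i] += 1
    mean = sum(counts) / n_gates
    var = sum((c - mean) ** 2 for c in counts) / (n_gates - 1)
    return var / mean - 1.0
```

    For uncorrelated (Poisson) arrivals the counts' variance equals their mean and Y is near zero; correlated fission chains bunch captures into the same gate and push Y above zero as the gate widens.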

  8. The Design, Construction, and Initial Characterization of an Ultra-Low-Background Gas-Proportional Counting System

    SciTech Connect (OSTI)

    Seifert, Allen; Aalseth, Craig E.; Day, Anthony R.; Fuller, Erin S.; Hoppe, Eric W.; Keillor, Martin E.; Mace, Emily K.; Overman, Cory T.; Warren, Glen A.

    2013-05-01

    Over the past several years, the Pacific Northwest National Laboratory (PNNL) has developed an ultra-low-background proportional counter (ULBPC) technology. The resulting detector is the product of an effort to produce a low-background, physically robust gas proportional counter for applications like radon emanation measurements, groundwater tritium, and 37Ar. In order to take full advantage of the inherent low-background properties designed into the ULBPC, a comparably low-background dedicated counting system is required. An ultra-low-background counting system (ULBCS) was recently built in the new shallow underground laboratory at PNNL. With a design depth of 30 meters water-equivalent, the shallow underground laboratory provides approximately 100x fewer fast neutrons and 6x fewer muons than a surface location. The ULBCS itself provides additional shielding in the form of an active anti-cosmic veto (via 2-in. thick plastic scintillator paddles) and passive borated poly (1 in.), lead (6 in.), and copper (~3 in.) shielding. This work provides details on PNNL's new shallow underground laboratory, examines the motivation for the design of the counting system, and provides results from the characterization of the ULBCS, including initial detector background.

  9. 235U Determination using In-Beam Delayed Neutron Counting Technique at the NRU Reactor

    SciTech Connect (OSTI)

    Andrews, M. T.; Bentoumi, G.; Corcoran, E. C.; Dimayuga, I.; Kelly, D. G.; Li, L.; Sur, B.; Rogge, R. B.

    2015-11-17

    This paper describes a collaborative effort that saw the Royal Military College of Canada (RMC)'s delayed neutron and gamma counting apparatus transported to Canadian Nuclear Laboratories (CNL) for use in the neutron beamline at the National Research Universal (NRU) reactor. Samples containing mg quantities of fissile material were re-interrogated, and their delayed neutron emissions measured. This collaboration offers significant advantages over previous delayed neutron research at both CNL and RMC. This paper details the determination of 235U content in enriched uranium via the assay of in-beam delayed neutron magnitudes and temporal behavior. 235U mass was determined with an average absolute error of ± 2.7%. This error is lower than that obtained at RMC for the assay of 235U content in aqueous solutions (3.6%) using delayed neutron counting. Delayed neutron counting has been demonstrated to be a rapid, accurate, and precise method for special nuclear material detection and identification.

  10. The sixth Standing Committee Meeting

    Office of Environmental Management (EM)

    nearly 17 million smart meters that give consumers better information and automatically report outages, smart relays that sense and recover from faults in the substation automatically, automated feeder switches that re-route power around problems, and storage batteries that store excess energy and make it available later to the grid to meet

  11. A Practical Approach for Integrating Automatically Designed Fixtures with Automated Assembly Planning

    SciTech Connect (OSTI)

    Calton, Terri L.; Peters, Ralph R.

    1999-07-20

    This paper presents a practical approach for integrating automatically designed fixtures with automated assembly planning. Product assembly problems vary widely; here the focus is on assemblies that are characterized by a single base part to which a number of smaller parts and subassemblies are attached. This method starts with three-dimensional CAD descriptions of an assembly whose assembly tasks require a fixture to hold the base part. It then combines algorithms that automatically design assembly pallets to hold the base part with algorithms that automatically generate assembly sequences. The designed fixtures rigidly constrain and locate the part, obey task constraints, are robust to part shape variations, are easy to load, and are economical to produce. The algorithm is guaranteed to find the global optimum solution that satisfies these and other pragmatic conditions. The assembly planner consists of four main elements: a user interface, a constraint system, a search engine, and an animation module. The planner expresses all constraints at a sequencing level, specifying orders and conditions on part mating operations in a number of ways. Fast replanning enables an interactive plan-view-constrain-replan cycle that aids in constraint discovery and documentation. The combined algorithms guarantee that the fixture will hold the base part without interfering with any of the assembly operations. This paper presents an overview of the planners, the integration approach, and the results of the integrated algorithms applied to several practical manufacturing problems. For these problems, initial high-quality fixture designs and assembly sequences are generated in a matter of minutes, with global optimum solutions identified in just over an hour.

  12. Halbach array generator/motor having an automatically regulated output voltage and mechanical power output

    DOE Patents [OSTI]

    Post, Richard F.

    2005-02-22

    A motor/generator having its stationary portion, i.e., the stator, positioned concentrically within its rotatable element, i.e., the rotor, along its axis of rotation. The rotor includes a Halbach array. The stator windings are switched or commutated to provide a DC motor/generator much the same as in a conventional DC motor/generator. The voltage and power are automatically regulated by using centrifugal force to change the diameter of the rotor, and thereby vary the radial gap in between the stator and the rotating Halbach array, as a function of the angular velocity of the rotor.

  13. Automatic generation of stop word lists for information retrieval and analysis

    DOE Patents [OSTI]

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed, and both a keyword adjacency frequency and a keyword frequency are determined for each term. If the ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
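
    The exclusion rule in the patent abstract can be sketched as follows. This is a simplified reading — a term that frequently borders keyword phrases but rarely appears inside them is a stop-word candidate; the whitespace tokenization, the +1 smoothing, and the frequency-based truncation criterion are assumptions, not the patent's exact procedure:

```python
from collections import Counter

def generate_stop_words(corpus, keywords, min_ratio=2.0, max_size=100):
    """Emit terms whose keyword-adjacency frequency is at least min_ratio
    times their keyword frequency, most-frequently-adjacent first."""
    phrases = [k.lower().split() for k in keywords]
    in_kw = Counter()      # keyword frequency: term occurs inside a keyword phrase
    adjacent = Counter()   # keyword adjacency frequency: term borders a phrase
    for doc in corpus:
        toks = doc.lower().split()
        covered = [False] * len(toks)
        spans = []
        for ph in phrases:
            for i in range(len(toks) - len(ph) + 1):
                if toks[i:i + len(ph)] == ph:
                    spans.append((i, i + len(ph)))
                    for j in range(i, i + len(ph)):
                        covered[j] = True
        for start, end in spans:
            for j in range(start, end):
                in_kw[toks[j]] += 1
            for j in (start - 1, end):
                if 0 <= j < len(toks) and not covered[j]:
                    adjacent[toks[j]] += 1
    # Exclude terms whose adjacency-to-keyword-frequency ratio falls below
    # min_ratio, then truncate to the most frequent survivors.
    stop = [t for t, a in adjacent.items() if a / max(in_kw[t], 1) >= min_ratio]
    stop.sort(key=lambda t: -adjacent[t])
    return stop[:max_size]
```

    Function words such as "the" or "and" almost never occur inside keyword phrases but constantly border them, so they survive the ratio test; content terms fail it.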

  14. Progress report on a fully automatic Gas Tungsten Arc Welding (GTAW) system development

    SciTech Connect (OSTI)

    Daumeyer, G.J. III

    1994-12-01

    A plan to develop a fully automatic gas tungsten arc welding (GTAW) system that will utilize a vision-sensing computer (which will provide in-process feedback control) is presently under way. Evaluations of different technological aspects and system design requirements continue. This report summarizes major activities in the plan's progress. The technological feasibility of producing the fully automated GTAW system has been proven. The goal of this process development project is to provide a production-ready system within the shortest reasonable time frame.

  15. Method of automatic measurement and focus of an electron beam and apparatus therefor

    DOE Patents [OSTI]

    Giedt, Warren H.; Campiotti, Richard

    1996-01-01

    An electron beam focusing system, including a plural slit-type Faraday beam trap, for measuring the diameter of an electron beam and automatically focusing the beam for welding. Beam size is determined from profiles of the current measured as the beam is swept over at least two narrow slits of the beam trap. An automated procedure changes the focus coil current until the focal point location is just below a workpiece surface. A parabolic equation is fitted to the calculated beam sizes from which optimal focus coil current and optimal beam diameter are determined.
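
    The parabolic-fit step can be illustrated with the standard three-point Lagrange fit, under the assumption that beam size varies quadratically with focus coil current near focus (a sketch, not the patent's actual implementation; names are illustrative):

```python
def parabola_vertex(points):
    """Fit y = a*x^2 + b*x + c through three (focus_current, beam_size)
    measurements and return the vertex: the optimal focus coil current
    and the minimal (optimal) beam diameter."""
    (x1, y1), (x2, y2), (x3, y3) = points
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1
         + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    x_opt = -b / (2 * a)
    y_opt = c - b * b / (4 * a)
    return x_opt, y_opt
```

    With more than three (current, size) samples a least-squares quadratic would be used instead; the vertex current would then be offset slightly so the focal point sits just below the workpiece surface, as the patent describes.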

  16. Method of automatic measurement and focus of an electron beam and apparatus therefore

    DOE Patents [OSTI]

    Giedt, W.H.; Campiotti, R.

    1996-01-09

    An electron beam focusing system, including a plural slit-type Faraday beam trap, for measuring the diameter of an electron beam and automatically focusing the beam for welding is disclosed. Beam size is determined from profiles of the current measured as the beam is swept over at least two narrow slits of the beam trap. An automated procedure changes the focus coil current until the focal point location is just below a workpiece surface. A parabolic equation is fitted to the calculated beam sizes from which optimal focus coil current and optimal beam diameter are determined. 12 figs.

  17. Characteristic performance evaluation of a photon counting Si strip detector for low dose spectral breast CT imaging

    SciTech Connect (OSTI)

    Cho, Hyo-Min; Ding, Huanjun; Molloi, Sabee; Barber, William C.; Iwanczyk, Jan S.

    2014-09-15

    Purpose: The possible clinical applications which can be performed using a newly developed detector depend on the detector's characteristic performance in a number of metrics including the dynamic range, resolution, uniformity, and stability. The authors have evaluated a prototype energy resolved fast photon counting x-ray detector based on a silicon (Si) strip sensor used in an edge-on geometry with an application specific integrated circuit to record the number of x-rays and their energies at high flux and fast frame rates. The investigated detector was integrated with a dedicated breast spectral computed tomography (CT) system to make use of the detector's high spatial and energy resolution and low noise performance under conditions suitable for clinical breast imaging. The aim of this article is to investigate the intrinsic characteristics of the detector, in terms of maximum output count rate, spatial and energy resolution, and noise performance of the imaging system. Methods: The maximum output count rate was obtained with a 50 W x-ray tube with a maximum continuous output of 50 kVp at 1.0 mA. A {sup 109}Cd source, with a characteristic x-ray peak at 22 keV from Ag, was used to measure the energy resolution of the detector. The axial plane modulation transfer function (MTF) was measured using a 67 µm diameter tungsten wire. The two-dimensional (2D) noise power spectrum (NPS) was measured using flat field images and noise equivalent quanta (NEQ) were calculated using the MTF and NPS results. The image quality parameters were studied as a function of various radiation doses and reconstruction filters. The one-dimensional (1D) NPS was used to investigate the effect of electronic noise elimination by varying the minimum energy threshold. Results: A maximum output count rate of 100 million counts per second per square millimeter (cps/mm{sup 2}) has been obtained (1 million cps per 100 × 100 µm pixel). The electrical noise floor was less than 4 keV. The energy
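
    The NEQ computation mentioned in the Methods can be sketched with the conventional definition NEQ(f) = S{sup 2}·MTF{sup 2}(f)/NPS(f), where S is the large-area mean signal. This uses the standard formula as an assumption; the authors' exact normalization is not given in the abstract:

```python
def neq(mean_signal, mtf, nps):
    """Noise-equivalent quanta at each sampled spatial frequency:
    NEQ(f) = mean_signal**2 * MTF(f)**2 / NPS(f)."""
    return [mean_signal**2 * m * m / n for m, n in zip(mtf, nps)]
```

    Lowering the minimum energy threshold into the electronic noise floor raises the NPS without improving the MTF, which is why the 1D NPS study above isolates the threshold's effect.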

  18. Low-noise low-jitter 32-pixels CMOS single-photon avalanche diodes array for single-photon counting from 300 nm to 900 nm

    SciTech Connect (OSTI)

    Scarcella, Carmelo; Tosi, Alberto; Villa, Federica; Tisa, Simone; Zappa, Franco

    2013-12-15

    We developed a single-photon counting multichannel detection system, based on a monolithic linear array of 32 CMOS SPADs (Complementary Metal-Oxide-Semiconductor Single-Photon Avalanche Diodes). All channels achieve a timing resolution of 100 ps (full-width at half maximum) and a photon detection efficiency of 50% at 400 nm. Dark count rate is very low even at room temperature, being about 125 counts/s for 50 µm active area diameter SPADs. Detection performance and microelectronic compactness of this CMOS SPAD array make it the best candidate for ultra-compact time-resolved spectrometers with single-photon sensitivity from 300 nm to 900 nm.

  19. Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets

    SciTech Connect (OSTI)

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul; Li, Yaquin; Garg, Seema; Tobin Jr, Kenneth William; Chaum, Edward

    2011-01-01

    Diabetic macular edema (DME) is a common vision threatening complication of diabetic retinopathy. In a large scale screening environment DME can be assessed by detecting exudates (a type of bright lesions) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.

  20. Automatic Transformation of MPI Programs to Asynchronous, Graph-Driven Form

    SciTech Connect (OSTI)

    Baden, Scott B; Weare, John H; Bylaska, Eric J

    2013-04-30

    The goals of this project are to develop new, scalable, high-fidelity algorithms for atomic-level simulations and program transformations that automatically restructure existing applications, enabling them to scale forward to Petascale systems and beyond. The techniques enable legacy MPI application code to exploit greater parallelism through increased latency hiding and improved workload assignment. The techniques were successfully demonstrated on high-end scalable systems located at DOE laboratories. Besides the automatic MPI program transformation efforts, the project also developed several new scalable algorithms for ab-initio molecular dynamics, including new massively parallel algorithms for hybrid DFT and new parallel-in-time algorithms for molecular dynamics and ab-initio molecular dynamics. These algorithms were shown to scale to very large numbers of cores, and they were designed to work in the latency hiding framework developed in this project. The effectiveness of the developments was enhanced by the direct application to real grand challenge simulation problems covering a wide range of technologically important applications, time scales and accuracies. These included the simulation of the electronic structure of mineral/fluid interfaces, the very accurate simulation of chemical reactions in microsolvated environments, and the simulation of chemical behavior in very large enzyme reactions.

  1. Automatic Generation of Data Types for Classification of Deep Web Sources

    SciTech Connect (OSTI)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error-prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.

  2. 2011 Status of the Automatic Alignment System for the National Ignition Facility

    SciTech Connect (OSTI)

    Wilhelmsen, K; Awwal, A; Burkhart, S; McGuigan, D; Kamm, V M; Leach, R; Lowe-Webb, R; Wilson, R

    2011-07-19

    Automated alignment for the National Ignition Facility (NIF) is accomplished using a large-scale parallel control system that directs 192 laser beams along the 300-m optical path. The beams are then focused down to a 50-micron spot in the middle of the target chamber. The entire process is completed in less than 50 minutes. The alignment system commands 9,000 stepping motors for highly accurate adjustment of mirrors and other optics. 41 control loops per beamline perform parallel processing services running on a LINUX cluster to analyze high-resolution images of the beams and their references. This paper describes the status of the NIF automatic alignment system and the challenges encountered as NIF development has transitioned from building the laser to supporting a 24-hour, 7-day research facility. NIF is now a continuously operated system where performance monitoring is increasingly critical for operation, maintenance, and commissioning tasks. Equipment wear and the effects of high energy neutrons from fusion experiments are issues which alter alignment efficiency and accuracy. New sensors needing automatic alignment assistance are common. System modifications to improve efficiency and accuracy are prevalent. Handling these evolving alignment and maintenance needs while minimizing the impact on the NIF experiment schedule is expected to be an on-going challenge for the planned 30-year operational life of NIF.

  3. Microsoft Word - DOE-ID-15-033 Arizona State EC B3-6.doc

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

SECTION A. Project Title: Automatic Imagery Data Analysis for Proactive Computer-Based Workflow Management during Nuclear Power Plant Outages - Arizona State University SECTION B. Project Description Arizona State University proposes to test the hypothesis that real-time imagery-based object tracking and spatial analysis, as well as human behavior modeling of outage participants, will significantly improve the efficiency of outage control while lowering the rates of accidents and incidents.

  4. A comparative analysis of OTF, NPS, and DQE in energy integrating and photon counting digital x-ray detectors

    SciTech Connect (OSTI)

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2010-12-15

Purpose: One of the benefits of photon counting (PC) detectors over energy integrating (EI) detectors is the absence of many additive noise sources, such as electronic noise and secondary quantum noise. The purpose of this work is to demonstrate that thresholding voltage gains to detect individual x rays actually generates an unexpected source of white noise in photon counters. Methods: To distinguish the two detector types, their point spread function (PSF) is interpreted differently. The PSF of the energy integrating detector is treated as a weighting function for counting x rays, while the PSF of the photon counting detector is interpreted as a probability. Although this model ignores some subtleties of real imaging systems, such as scatter and the energy-dependent amplification of secondary quanta in indirect-converting detectors, it is useful for demonstrating fundamental differences between the two detector types. From first principles, the optical transfer function (OTF) is calculated as the continuous Fourier transform of the PSF, the noise power spectrum (NPS) is determined by the discrete space Fourier transform (DSFT) of the autocovariance of signal intensity, and the detective quantum efficiency (DQE) is found from combined knowledge of the OTF and NPS. To illustrate the calculation of the transfer functions, the PSF is modeled as the convolution of a Gaussian with the product of rect functions. The Gaussian reflects the blurring of the x-ray converter, while the rect functions model the sampling of the detector. Results: The transfer functions are first calculated assuming outside noise sources such as electronic noise and secondary quantum noise are negligible. It is demonstrated that while the OTF is the same for two detector types possessing an equivalent PSF, a frequency-independent (i.e., "white") difference in their NPS exists such that NPS_PC ≥ NPS_EI and hence DQE_PC ≤ DQE_EI. The necessary and sufficient condition for

  5. A new method of passive counting of nuclear missile warheads -a white paper for the Defense Threat Reduction Agency

    SciTech Connect (OSTI)

    Morris, Christopher; Durham, J. Matthew; Guardincerri, Elena; Bacon, Jeffrey Darnell; Wang, Zhehui; Fellows, Shelby; Poulson, Daniel Cris; Plaud-Ramos, Kenie Omar; Daughton, Tess Marie; Johnson, Olivia Ruth

    2015-07-31

Cosmic ray muon imaging has been studied for the past several years as a possible technique for nuclear warhead inspection and verification as part of the New Strategic Arms Reduction Treaty between the United States and the Russian Federation. The Los Alamos team has studied two different muon imaging methods for this application, using detectors on two sides and on one side of the object of interest. In this report we present results obtained on single-sided imaging of configurations aimed at demonstrating the potential of this technique for counting nuclear warheads in place with detectors above the closed hatch of a ballistic missile submarine.

  6. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    SciTech Connect (OSTI)

Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang; Liou, Shu-Cheng; Nath, Ravinder; Liu, Wu

    2013-01-15

Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching must accommodate object shapes that change significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computation load, because they all require an exhaustive search in the region of interest. The authors solve this problem by synergistic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The
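The mean-shift sequential-tracking idea, shifting a window toward the local intensity centroid frame after frame, can be sketched as follows (a toy implementation on a synthetic frame; the window radius, frame, and scenario are illustrative, not the authors' code):

```python
import numpy as np

def mean_shift_2d(image, start, radius=3, iters=20):
    """Toy mean-shift step: move a circular window toward the local
    intensity centroid until the shift becomes negligible."""
    y, x = start
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    for _ in range(iters):
        mask = (ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2
        w = image * mask                      # intensities inside the window
        total = w.sum()
        if total == 0:
            break                             # no signal under the window
        ny, nx = (ys * w).sum() / total, (xs * w).sum() / total
        if abs(ny - y) < 1e-3 and abs(nx - x) < 1e-3:
            break                             # converged
        y, x = ny, nx
    return y, x

# Bright "marker" blob centered at (12, 15) in a dim 24x24 frame;
# the previous frame's position (9, 12) seeds the search.
frame = np.zeros((24, 24))
frame[11:14, 14:17] = 1.0
print(mean_shift_2d(frame, start=(9, 12)))  # converges to (12.0, 15.0)
```

Because the seed comes from the previous frame, only a few local shifts are needed, which is what lets the method avoid an exhaustive search.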

  7. Thermostatically controlled portable electric space heater with automatic temperature setback for energy saving

    SciTech Connect (OSTI)

    Shao, S.

    1994-01-11

An electrically powered portable space heater of the type having one or more vertically extending fin-tube heating elements disposed within an elongated housing has a selectively adjustable temperature controller responsive to a signal from an upwardly extending thermistor externally and pivotally mounted on the rear of the heater housing for movement from a storage position behind the housing to an upraised operative position, the thermistor also being used to supply a room temperature signal to an ambient temperature display device on the heater housing. Furthermore, the heater includes a selectively actuatable energy-saving feature which, when actuated, automatically reduces by 5 degrees F, after a period of one hour, the temperature to which the heater has been pre-set by the operator. 17 figs.

  8. Drift problems in the automatic analysis of gamma-ray spectra using associative memory algorithms

    SciTech Connect (OSTI)

Olmos, P.; Diaz, J.C.; Perez, J.M.; Aguayo, P.; Gomez, P.; Rodellar, V.

    1994-06-01

Perturbations affecting nuclear radiation spectrometers during their operation frequently spoil the accuracy of automatic analysis methods. One of the problems usually found in practice is fluctuation in the spectrum gain and zero, produced by drifts in the detector and nuclear electronics. The pattern acquired under these conditions may differ significantly from that expected with stable instrumentation, thus complicating the identification and quantification of the radionuclides present in it. In this work, the performance of Associative Memory algorithms when dealing with spectra affected by drifts is explored, considering a linear energy-calibration function. The formulation of the extended algorithm, constructed to quantify the possible presence of drifts in the spectrometer, is derived, and the results obtained from its application to several practical cases are discussed.
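Once the gain and zero of the linear calibration E = gain·ch + zero have been quantified, the recorded spectrum can be remapped back onto the reference channel scale. A minimal nearest-bin sketch (the spectrum and drift values are invented for illustration):

```python
def correct_drift(spectrum, gain, zero, ref_gain=1.0, ref_zero=0.0):
    """Remap counts recorded under a drifted linear calibration
    E = gain*ch + zero onto reference channels (nearest-bin sketch)."""
    corrected = [0] * len(spectrum)
    for ch, counts in enumerate(spectrum):
        energy = gain * ch + zero
        ref_ch = round((energy - ref_zero) / ref_gain)
        if 0 <= ref_ch < len(corrected):
            corrected[ref_ch] += counts
    return corrected

# A photopeak that belongs at channel 50 is recorded at channel 48
# because of a +2-channel zero drift; the correction moves it back.
raw = [0] * 100
raw[48] = 1000
print(correct_drift(raw, gain=1.0, zero=2.0).index(1000))  # 50
```

A production code would interpolate fractional bin contents rather than round to the nearest channel, but the linear remapping is the same.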

  9. Automatic analysis of flow cytometric DNA histograms from irradiated mouse male germ cells

    SciTech Connect (OSTI)

    Lampariello, F.; Mauro, F.; Uccelli, R.; Spano, M.

    1989-01-01

An automatic procedure for recovering the DNA content distribution of irradiated mouse testis cells from flow cytometric histograms is presented. First, a suitable mathematical model is developed to represent the pattern of DNA content and fluorescence distribution in the sample. Then a parameter estimation procedure, based on the maximum likelihood approach, is constructed by means of an optimization technique. This procedure has been applied to a set of DNA histograms corresponding to different doses of 0.4-MeV neutrons and to different time intervals after irradiation. In each case, good agreement between the measured histograms and the corresponding fits has been obtained. The results indicate that the proposed method for the quantitative analysis of germ cell DNA histograms can be usefully applied to the study of the cytotoxic and mutagenic action of agents of toxicological interest such as ionizing radiation. 18 references.

  10. Near-continuous measurement of hydrogen sulfide and carbonyl sulfide by an automatic gas chromatograph

    SciTech Connect (OSTI)

    Lindgren, E.R.; Pershing, D.W.; Kirchgessner, D.A.; Drehmel, D.C.

    1991-01-01

    The article describes an automatic gas chromatograph with a flame photometric detector (GC-FPD) that samples and analyzes hydrogen sulfide (H2S) and carbonyl sulfide (COS) at 30-sec intervals. Temperature programming was used to elute trace amounts of carbon disulfide (CS2) present in each injection from a Supelpak-S column in a single peak at the end of 15 min runs. The system was used to study the high-temperature fuel-rich sulfur capture reactions of H2S and COS with injected calcium oxide (CaO) sorbent, necessitating the near continuous measurement of these gaseous sulfur species. The H2S concentration ranged from 300 to 3000 ppm, and the COS from 30 to 300 ppm. The system was also used to monitor sulfur dioxide (SO2) levels under fuel-lean conditions: results compared very closely with SO2 measurements made simultaneously with continuous ultraviolet (UV) SO2 instrumentation.

  11. An Automatic Impact-based Delamination Detection System for Concrete Bridge Decks

    SciTech Connect (OSTI)

    Zhang, Gang; Harichandran, Ronald S.; Ramuhalli, Pradeep

    2012-01-02

Delamination of concrete bridge decks is a commonly observed distress in corrosive environments. In traditional acoustic inspection methods, delamination is assessed by the "hollowness" of the sound created by impacting the bridge deck with a hammer or bar or by dragging a chain; the signals are often contaminated by ambient traffic noise, and the detection is highly subjective. In the proposed method, a modified version of independent component analysis (ICA) is used to filter the traffic noise. To eliminate subjectivity, Mel-frequency cepstral coefficients (MFCC) are used as features for detection, and the delamination is detected by a radial basis function (RBF) neural network. Results from both experimental and field data suggest that the proposed method is noise robust and has satisfactory performance. The method can also detect the delamination of repair patches and of concrete below the repair patches. The algorithms were incorporated into an automatic impact-based delamination detection (AIDD) system for field application.
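An RBF network classifier of the general kind described can be sketched in a few lines (the 2-D feature vectors, centers, weights, and kernel width below are invented for illustration; a real system would train them on MFCC features extracted from impact sounds):

```python
import numpy as np

def rbf_classify(x, centers, weights, gamma=1.0):
    """Evaluate a trained RBF network: Gaussian activations over
    learned centers, then a weighted sum thresholded into two classes."""
    phi = np.exp(-gamma * np.sum((centers - x) ** 2, axis=1))
    score = weights @ phi        # output-layer weighted sum
    return "delaminated" if score > 0 else "intact"

# Hypothetical feature space (e.g., two cepstral coefficients):
# one prototype for intact sound, one for the hollow delaminated sound.
centers = np.array([[0.0, 0.0], [3.0, 3.0]])
weights = np.array([-1.0, 1.0])
print(rbf_classify(np.array([2.8, 3.1]), centers, weights))  # delaminated
```

A feature vector near the "hollow" prototype activates that center's Gaussian most strongly, pushing the score positive.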

  12. Automatic coke oven heating control system at Burns Harbor for normal and repair operation

    SciTech Connect (OSTI)

    Battle, E.T.; Chen, K.L.

    1997-12-31

    An automatic heating control system for coke oven batteries was developed in 1985 for the Burns Harbor No. 1 battery and reported in the 1989 Ironmaking Conference Proceedings. The original system was designed to maintain a target coke temperature at a given production level under normal operating conditions. Since 1989, enhancements have been made to this control system so that it can also control the battery heating when the battery is under repair. The new control system has improved heating control capability because it adjusts the heat input to the battery in response to anticipated changes in the production schedule. During a recent repair of this 82 oven battery, the pushing schedule changed from 102 ovens/day to 88 ovens/day, then back to 102 ovens/day, then to 107 ovens/day. During this repair, the control system was able to maintain the coke temperature average standard deviation at 44 F, with a maximum 75 F.

  13. A Method for the Automatic Detection of Insect Clutter in Doppler-Radar Returns.

    SciTech Connect (OSTI)

Luke, E.; Kollias, P.; Johnson, K.

    2006-06-12

    The accurate detection and removal of insect clutter from millimeter wavelength cloud radar (MMCR) returns is of high importance to boundary layer cloud research (e.g., Geerts et al., 2005). When only radar Doppler moments are available, it is difficult to produce a reliable screening of insect clutter from cloud returns because their distributions overlap. Hence, screening of MMCR insect clutter has historically involved a laborious manual process of cross-referencing radar moments against measurements from other collocated instruments, such as lidar. Our study looks beyond traditional radar moments to ask whether analysis of recorded Doppler spectra can serve as the basis for reliable, automatic insect clutter screening. We focus on the MMCR operated by the Department of Energy's (DOE) Atmospheric Radiation Measurement (ARM) program at its Southern Great Plains (SGP) facility in Oklahoma. Here, archiving of full Doppler spectra began in September 2003, and during the warmer months, a pronounced insect presence regularly introduces clutter into boundary layer returns.

  14. Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities

    SciTech Connect (OSTI)

    Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.; Salas, E.; Martin, N.

    2008-01-15

The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that combines information about the physical inventory of the entire plant with radiological survey data. The radiological inventory of all the components and civil structures of the plant can be estimated with mathematical models using a statistical approach. A computer application has been developed to obtain the radiological inventory automatically. Results: A computer application has been developed that estimates the radiological inventory from the radiological measurements or the characterization program. It includes the statistical functions needed to estimate central tendency and variability, e.g., mean, median, variance, confidence intervals, and coefficients of variation. This application is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision making in future sampling surveys.
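The central-tendency and variability statistics listed can be computed with standard library routines; a sketch (the measurement values are hypothetical, and the 95% interval uses a normal approximation rather than a t-distribution):

```python
import math
import statistics

def inventory_summary(measurements, z=1.96):
    """Summary statistics of the kind such an application reports."""
    n = len(measurements)
    mean = statistics.fmean(measurements)
    sd = statistics.stdev(measurements)   # sample standard deviation
    half = z * sd / math.sqrt(n)          # half-width of the 95% CI
    return {
        "mean": mean,
        "median": statistics.median(measurements),
        "variance": statistics.variance(measurements),
        "cv": sd / mean,                  # coefficient of variation
        "ci95": (mean - half, mean + half),
    }

# Hypothetical surface-activity survey measurements (Bq/cm^2).
activity = [12.1, 9.8, 11.4, 10.6, 13.0, 10.9, 11.7, 10.2]
s = inventory_summary(activity)
print(round(s["mean"], 2), round(s["cv"], 3))  # 11.21 0.094
```

With small survey samples a real tool would use Student's t instead of the fixed z = 1.96; the structure of the computation is otherwise the same.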

  15. Challenges and Opportunities in Using Automatic Differentiation with Object-Oriented Toolkits for Scientific Computing

    SciTech Connect (OSTI)

    Hovland, P; Lee, S; McInnes, L; Norris, B; Smith, B

    2001-04-17

The increased use of object-oriented toolkits in large-scale scientific simulation presents new opportunities and challenges for the use of automatic (or algorithmic) differentiation (AD) techniques, especially in the context of optimization. Because object-oriented toolkits use well-defined interfaces and data structures, there is potential for simplifying the AD process. Furthermore, derivative computation can be improved by exploiting high-level information about numerical and computational abstractions. However, challenges to the successful use of AD with these toolkits also exist. Among the greatest challenges is balancing the desire to limit the scope of the AD process with the desire to minimize the work required of a user. We discuss our experiences in integrating AD with the PETSc, PVODE, and TAO toolkits and our plans for future research and development in this area.
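Forward-mode AD, the basic technique behind such tools, can be illustrated with a minimal dual-number class that propagates a value and its derivative together (a pedagogical sketch, not the interface of any toolkit named above):

```python
class Dual:
    """Minimal forward-mode AD value: carries f and f' together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)  # sum rule
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(4.0, 1.0)                 # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)                # 57.0 26.0
```

Every arithmetic operation applies the corresponding differentiation rule, so the derivative emerges exactly, without finite-difference error, which is the core advantage AD brings to the optimization toolkits discussed.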

  16. Characterizing energy dependence and count rate performance of a dual scintillator fiber-optic detector for computed tomography

    SciTech Connect (OSTI)

Hoerner, Matthew R.; Stepusin, Elliott J.; Hyer, Daniel E.; Hintenlang, David E.

    2015-03-15

Purpose: Kilovoltage (kV) x-rays pose a significant challenge for radiation dosimetry. In the kV energy range, even small differences in material composition can result in significant variations in the absorbed energy between soft tissue and the detector. In addition, the use of electronic systems in light detection has demonstrated measurement losses at high photon fluence rates incident to the detector. This study investigated the feasibility of using a novel dual scintillator detector and whether its response to changes in beam energy from scatter and hardening is readily quantified. The detector incorporates a tissue-equivalent plastic scintillator and a gadolinium oxysulfide scintillator, which has a higher sensitivity to scatter x-rays. Methods: The detector was constructed by coupling two scintillators: (1) a small cylindrical plastic scintillator, 500 μm in diameter and 2 mm in length, and (2) a 100-micron sheet of gadolinium oxysulfide 500 μm in diameter, each to a 2 m long optical fiber, which acts as a light guide to transmit scintillation photons from the sensitive element to a photomultiplier tube. Count rate linearity data were obtained from a wide range of exposure rates delivered from a radiological x-ray tube by adjusting the tube current. The data were fitted to a nonparalyzable dead time model to characterize the time response. The true counting rate was related to the reference free air dose air rate measured with a 0.6 cm³ Radcal® thimble chamber as described in AAPM Report No. 111. Secondary electron and photon spectra were evaluated using Monte Carlo techniques to analyze ionization quenching and photon energy-absorption characteristics from free-in-air and in phantom measurements. The depth/energy dependence of the detector was characterized using a computed tomography dose index QA phantom consisting of nested adult head and body segments. The phantom provided up to 32 cm of acrylic with a compatible 0.6 cm³ calibrated
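The nonparalyzable dead-time model used to characterize the time response has a simple closed form, m = n/(1 + nτ), which inverts to n = m/(1 − mτ). A sketch with an assumed, hypothetical dead time:

```python
def observed_rate(true_rate_n, tau):
    """Nonparalyzable dead-time model: m = n / (1 + n*tau)."""
    return true_rate_n / (1.0 + true_rate_n * tau)

def true_rate(observed_m, tau):
    """Invert the model to recover the true rate: n = m / (1 - m*tau)."""
    return observed_m / (1.0 - observed_m * tau)

tau = 2e-6                  # hypothetical 2 us dead time per event
n = 1.0e5                   # true count rate, counts/s
m = observed_rate(n, tau)
print(round(m), round(true_rate(m, tau)))  # 83333 100000
```

Fitting τ to measured count-rate linearity data, as the study does, lets recorded rates be corrected back to true rates at high fluence.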

  17. Faint submillimeter galaxies revealed by multifield deep ALMA observations: number counts, spatial clustering, and a dark submillimeter line emitter

    SciTech Connect (OSTI)

    Ono, Yoshiaki; Ouchi, Masami; Momose, Rieko; Kurono, Yasutaka

    2014-11-01

We present the statistics of faint submillimeter/millimeter galaxies (SMGs) and serendipitous detections of a submillimeter/millimeter line emitter (SLE) with no multi-wavelength continuum counterpart revealed by the deep ALMA observations. We identify faint SMGs with flux densities of 0.1-1.0 mJy in the deep Band-6 and Band-7 maps of 10 independent fields that reduce cosmic variance effects. The differential number counts at 1.2 mm are found to increase with decreasing flux density down to 0.1 mJy. Our number counts indicate that the faint (0.1-1.0 mJy, or SFR_IR ∼ 30-300 M_⊙ yr⁻¹) SMGs contribute nearly a half of the extragalactic background light (EBL), while the remaining half of the EBL is mostly contributed by very faint sources with flux densities of <0.1 mJy (SFR_IR ≲ 30 M_⊙ yr⁻¹). We conduct counts-in-cells analysis with multifield ALMA data for the faint SMGs, and obtain a coarse estimate of galaxy bias, b_g < 4. The galaxy bias suggests that the dark halo masses of the faint SMGs are ≲ 7 × 10¹² M_⊙, which is smaller than those of bright (>1 mJy) SMGs, but consistent with abundant high-z star-forming populations, such as sBzKs, LBGs, and LAEs. Finally, we report the serendipitous detection of SLE-1, which has no continuum counterparts in our 1.2 mm-band or multi-wavelength images, including ultra deep HST/WFC3 and Spitzer data. The SLE has a significant line at 249.9 GHz with a signal-to-noise ratio of 7.1. If the SLE is not a spurious source made by the unknown systematic noise of ALMA, the strong upper limits of our multi-wavelength data suggest that the SLE would be a faint galaxy at z ≳ 6.

  18. A cascaded model of spectral distortions due to spectral response effects and pulse pileup effects in a photon-counting x-ray detector for CT

    SciTech Connect (OSTI)

Cammin, Jochen; Taguchi, Katsuyuki; Xu, Jennifer; Barber, William C.; Iwanczyk, Jan S.; Hartsough, Neal E.

    2014-04-15

Purpose: Energy discriminating, photon-counting detectors (PCDs) are an emerging technology for computed tomography (CT) with various potential benefits for clinical CT. The photon energies measured by PCDs can be distorted due to the interactions of a photon with the detector and the interaction of multiple coincident photons. These effects result in distorted recorded x-ray spectra which may lead to artifacts in reconstructed CT images and inaccuracies in tissue identification. Model-based compensation techniques have the potential to account for the distortion effects. This approach requires only a small number of parameters and is applicable to a wide range of spectra and count rates, but it needs an accurate model of the spectral distortions occurring in PCDs. The purpose of this study was to develop a model of those spectral distortions and to evaluate the model using a PCD (model DXMCT-1; DxRay, Inc., Northridge, CA) and various x-ray spectra in a wide range of count rates. Methods: The authors hypothesize that the complex phenomena of spectral distortions can be modeled by: (1) separating them into count-rate independent factors that we call the spectral response effects (SRE), and count-rate dependent factors that we call the pulse pileup effects (PPE), (2) developing separate models for SRE and PPE, and (3) cascading the SRE and PPE models into a combined SRE+PPE model that describes PCD distortions at both low and high count rates. The SRE model describes the probability distribution of the recorded spectrum, with a photo peak and a continuum tail, given the incident photon energy. Model parameters were obtained from calibration measurements with three radioisotopes and then interpolated linearly for other energies. The PPE model used was developed in the authors' previous work [K. Taguchi et al., "Modeling the performance of a photon counting x-ray detector for CT: Energy response and pulse pileup effects," Med. Phys. 38(2), 1089-1102 (2011)]. The

  19. Centroid Position as a Function of Total Counts in a Windowed CMOS Image of a Point Source

    SciTech Connect (OSTI)

    Wurtz, R E; Olivier, S; Riot, V; Hanold, B J; Figer, D F

    2010-05-27

    We obtained 960,200 22-by-22-pixel windowed images of a pinhole spot using the Teledyne H2RG CMOS detector with un-cooled SIDECAR readout. We performed an analysis to determine the precision we might expect in the position error signals to a telescope's guider system. We find that, under non-optimized operating conditions, the error in the computed centroid is strongly dependent on the total counts in the point image only below a certain threshold, approximately 50,000 photo-electrons. The LSST guider camera specification currently requires a 0.04 arcsecond error at 10 Hertz. Given the performance measured here, this specification can be delivered with a single star at 14th to 18th magnitude, depending on the passband.
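The centroid computation underlying such guider error signals is an intensity-weighted mean over the window; a minimal sketch on a synthetic spot (the background subtraction and spot shape are invented for illustration, not the analysis pipeline used in the study):

```python
import numpy as np

def centroid(window, bias=0.0):
    """Intensity-weighted centroid of a windowed point-source image
    after subtracting an estimated background level."""
    img = np.clip(window - bias, 0, None)     # remove background floor
    total = img.sum()
    ys, xs = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    return (ys * img).sum() / total, (xs * img).sum() / total

# 22x22 window with a symmetric spot centered at (10.0, 11.0).
win = np.zeros((22, 22))
win[9:12, 10:13] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
print(centroid(win))  # centroid recovered at (10.0, 11.0)
```

At low total counts, photon shot noise in each pixel perturbs this weighted mean, which is consistent with the threshold behavior the study reports.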

  20. Free-running InGaAs single photon detector with 1 dark count per second at 10% efficiency

    SciTech Connect (OSTI)

Korzh, B.; Walenta, N.; Lunghi, T.; Gisin, N.; Zbinden, H.

    2014-02-24

We present a free-running single photon detector for telecom wavelengths based on a negative feedback avalanche photodiode (NFAD). A dark count rate as low as 1 cps was obtained at a detection efficiency of 10%, with an afterpulse probability of 2.2% for 20 μs of deadtime. This was achieved by using an active hold-off circuit and cooling the NFAD with a free-piston Stirling cooler down to temperatures of −110 °C. We integrated two detectors into a practical, 625 MHz clocked quantum key distribution system. Stable, real-time key distribution in the presence of 30 dB channel loss was possible, yielding a secret key rate of 350 bps.

  1. Strategies for automatic online treatment plan reoptimization using clinical treatment planning system: A planning parameters study

    SciTech Connect (OSTI)

    Li, Taoran; Wu, Qiuwen; Zhang, You; Vergalasova, Irina; Lee, W. Robert; Yin, Fang-Fang; Wu, Q. Jackie

    2013-11-15

Purpose: Adaptive radiation therapy for prostate cancer using online reoptimization provides improved control of interfractional anatomy variations. However, the clinical implementation of online reoptimization is currently limited by the low efficiency of current strategies and the difficulties associated with integration into the current treatment planning system. This study investigates strategies for performing fast (∼2 min) automatic online reoptimization with a clinical fluence-map-based treatment planning system, and explores the performance of different input parameter settings: dose-volume histogram (DVH) objective settings, starting stage, and iteration number (in the context of real-time planning). Methods: Simulated treatments of 10 patients were reoptimized daily for the first week of treatment (5 fractions) using 12 different combinations of optimization strategies. Options for objective settings included guideline-based RTOG objectives, patient-specific objectives based on anatomy on the planning CT, and daily-CBCT anatomy-based objectives adapted from planning CT objectives. Options for starting stages involved starting reoptimization with and without the original plan's fluence map. Options for iteration numbers were 50 and 100. The adapted plans were then analyzed by statistical modeling, and compared both in terms of dosimetry and delivery efficiency. Results: All online reoptimized plans were finished within ∼2 min with excellent coverage and conformity to the daily target. The three input parameters, i.e., DVH objectives, starting stage, and iteration number, contributed to the outcome of optimization nearly independently. Patient-specific objectives generally provided better OAR sparing compared to guideline-based objectives. The benefit in high-dose sparing from incorporating daily anatomy into objective settings was positively correlated with the relative change in OAR volumes from planning CT to daily CBCT.
The use of the original

  2. Business Owners: Respond to an Energy Emergency | Department...

    Broader source: Energy.gov (indexed) [DOE]

    Decide whether to activate backup power-If your backup generator doesn't automatically turn on during a power outage, you'll have to determine when to activate backup systems. ...

  3. CX-013828: Categorical Exclusion Determination

    Office of Energy Efficiency and Renewable Energy (EERE)

Automatic Imagery Data Analysis for Proactive Computer-Based Workflow Management during Nuclear Power Plant Outages - Arizona State University CX(s) Applied: B3.6 Date: 06/17/2015 Location(s): Idaho Office(s): Nuclear Energy

  4. Section 36

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Since the MFRSR is called twice a day and after a power failure is restarted automatically at this time, the power outages result only in some loss of data during the warm period ...

  5. Evaluation of two-stage system for neutron measurement aiming at increase in count rate at Japan Atomic Energy Agency-Fusion Neutronics Source

    SciTech Connect (OSTI)

Shinohara, K.; Ochiai, K.; Sukegawa, A.; Ishii, K.; Kitajima, S.; Baba, M.; Sasao, M.

    2014-11-15

    In order to increase the count rate capability of a neutron detection system as a whole, we propose a multi-stage neutron detection system. Experiments to test the effectiveness of this concept were carried out on Fusion Neutronics Source. Comparing four configurations of alignment, it was found that the influence of an anterior stage on a posterior stage was negligible for the pulse height distribution. The two-stage system using 25 mm thickness scintillator was about 1.65 times the count rate capability of a single detector system for d-D neutrons and was about 1.8 times the count rate capability for d-T neutrons. The results suggested that the concept of a multi-stage detection system will work in practice.

  6. Performance analysis of distributed applications using automatic classification of communication inefficiencies

    DOE Patents [OSTI]

    Vetter, Jeffrey S.

    2005-02-01

The method and system described herein present a technique for performance analysis that helps users understand the communication behavior of their message passing applications. The method and system may automatically classify individual communication operations and reveal the cause of communication inefficiencies in the application. This classification allows the developer to quickly focus on the culprits of truly inefficient behavior, rather than manually foraging through massive amounts of performance data. Specifically, the method and system trace the message operations of Message Passing Interface (MPI) applications and then classify each individual communication event using a supervised learning technique: decision tree classification. The decision tree may be trained using microbenchmarks that demonstrate both efficient and inefficient communication. Since the method and system adapt to the target system's configuration through these microbenchmarks, they simultaneously automate the performance analysis process and improve classification accuracy. The method and system may improve the accuracy of performance analysis and dramatically reduce the amount of data that users must encounter.
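Decision-tree classification on communication-event features can be illustrated with a one-feature decision stump trained on microbenchmark-style examples (the event records, labels, and threshold search below are a toy stand-in, not the patented system):

```python
def train_stump(events, labels):
    """Fit a one-level decision tree ("stump") on a single duration
    feature by exhaustively trying each observed value as the cut."""
    best = None
    for t in sorted(e["duration"] for e in events):
        pred = ["inefficient" if e["duration"] > t else "efficient"
                for e in events]
        acc = sum(p == lab for p, lab in zip(pred, labels)) / len(labels)
        if best is None or acc > best[1]:
            best = (t, acc)          # keep the most accurate threshold
    return best[0]

# Microbenchmark-style training data: long waits labeled inefficient.
events = [{"duration": d} for d in (0.1, 0.2, 0.15, 5.0, 7.5, 6.1)]
labels = ["efficient"] * 3 + ["inefficient"] * 3
threshold = train_stump(events, labels)
print(threshold)  # 0.2 (any cut between 0.2 and 5.0 separates perfectly)
```

A real decision tree splits recursively over many features (message size, operation type, wait time); training it on system-specific microbenchmarks is what adapts the classifier to the target machine, as the patent describes.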

  7. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    O'keefe, Matthew; Parr, Terence; Edgar, B. Kevin; Anderson, Steve; Woodward, Paul; Dietz, Hank

    1995-01-01

Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how application codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  8. Applications of automatic mesh generation and adaptive methods in computational medicine

    SciTech Connect (OSTI)

    Schmidt, J.A.; Macleod, R.S.; Johnson, C.R.; Eason, J.C.

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state-of-the-art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications, we present a general-purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.
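
    A toy version of the Delaunay-plus-iterative-point-insertion loop described above: tessellate a point set with SciPy, then do one refinement pass by inserting a point in the largest triangle. The "largest area" criterion is a hypothetical stand-in for the finite element error estimates the abstract discusses.

```python
# Delaunay tessellation with one refinement step (sketch; the refinement
# criterion here is triangle area, not a finite element error estimate).
import numpy as np
from scipy.spatial import Delaunay

def triangle_areas(pts, tri):
    a = pts[tri.simplices[:, 0]]
    b = pts[tri.simplices[:, 1]]
    c = pts[tri.simplices[:, 2]]
    # 2D cross product of the two edge vectors, halved.
    return 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                        - (b[:, 1] - a[:, 1]) * (c[:, 0] - a[:, 0]))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.2, 0.2]])
tri = Delaunay(pts)

# One refinement step: split the largest triangle at its centroid.
worst = tri.simplices[np.argmax(triangle_areas(pts, tri))]
pts = np.vstack([pts, pts[worst].mean(axis=0)])
refined = Delaunay(pts)
```

    Inserting one strictly interior point into a 2D triangulation adds exactly two triangles (Euler's formula with an unchanged hull), which the test below checks.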

  9. Sideband Algorithm for Automatic Wind Turbine Gearbox Fault Detection and Diagnosis: Preprint

    SciTech Connect (OSTI)

    Zappala, D.; Tavner, P.; Crabtree, C.; Sheng, S.

    2013-01-01

    Improving the availability of wind turbines (WT) is critical to minimize the cost of wind energy, especially for offshore installations. As gearbox downtime has a significant impact on WT availability, the development of reliable and cost-effective gearbox condition monitoring systems (CMS) is of great concern to the wind industry. Timely detection and diagnosis of developing gear defects within a gearbox is an essential part of minimizing unplanned downtime of wind turbines. Monitoring signals from WT gearboxes are highly non-stationary as turbine load and speed vary continuously with time. Time-consuming and costly manual handling of large amounts of monitoring data represents one of the main limitations of most current CMSs, so automated algorithms are required. This paper presents a fault detection algorithm for incorporation into a commercial CMS for automatic gear fault detection and diagnosis. The algorithm allowed the assessment of gear fault severity by tracking progressive gear tooth damage during variable speed and load operating conditions of the test rig. Results show that the proposed technique proves efficient and reliable for detecting gear damage. Once implemented into WT CMSs, this algorithm can automate data interpretation, reducing the quantity of information that WT operators must handle.
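
    The sideband principle behind such algorithms can be shown in a few lines: local tooth damage amplitude-modulates the gear mesh vibration at the shaft rotation frequency, producing spectral sidebands around the mesh frequency, and summing the energy in those sideband bins gives a simple severity indicator. The frequencies and synthetic signals below are illustrative only, not the paper's algorithm.

```python
# Sideband-energy indicator for gear damage (illustrative frequencies/signals).
import numpy as np

fs = 2000.0                      # sample rate, Hz
f_mesh, f_shaft = 300.0, 10.0    # gear mesh and shaft rotation frequencies, Hz
t = np.arange(0, 2.0, 1 / fs)

def sideband_energy(signal, n_side=3):
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    # Bins at f_mesh +/- k * f_shaft, k = 1..n_side (the carrier bin excluded).
    bins = [int(np.argmin(np.abs(freqs - (f_mesh + k * f_shaft))))
            for k in range(-n_side, n_side + 1) if k != 0]
    return float(sum(spec[b] ** 2 for b in bins))

healthy = np.sin(2 * np.pi * f_mesh * t)
# Damage: 50% amplitude modulation at the shaft frequency creates sidebands.
faulty = (1 + 0.5 * np.sin(2 * np.pi * f_shaft * t)) * healthy
```

    For the healthy tone the sideband bins are empty, while the modulated signal puts amplitude 0.25 into each first-order sideband, so the indicator separates the two cases by many orders of magnitude.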

  10. Using stochastic activity networks to study the energy feasibility of automatic weather stations

    SciTech Connect (OSTI)

    Cassano, Luca; Cesarini, Daniel; Avvenuti, Marco

    2015-03-10

    Automatic Weather Stations (AWSs) are systems equipped with a number of environmental sensors and communication interfaces used to monitor harsh environments, such as glaciers and deserts. Designing such systems is challenging, since designers have to maximize the amount of sampled and transmitted data while considering the energy needs of the system that, in most cases, is powered by rechargeable batteries and exploits energy harvesting, e.g., solar cells and wind turbines. To support designers of AWSs in defining the software tasks and the hardware configuration of an AWS, we designed and implemented an energy-aware simulator of such systems. The simulator relies on the Stochastic Activity Networks (SANs) formalism and has been developed using the Möbius tool. In this paper we first show how we used the SAN formalism to model the various components of an AWS; we then report results from an experiment carried out to validate the simulator against a real-world AWS; and we finally show some examples of usage of the proposed simulator.

  11. HERSCHEL-ATLAS GALAXY COUNTS AND HIGH-REDSHIFT LUMINOSITY FUNCTIONS: THE FORMATION OF MASSIVE EARLY-TYPE GALAXIES

    SciTech Connect (OSTI)

    Lapi, A.; Gonzalez-Nuevo, J.; Fan, L.; Bressan, A.; De Zotti, G.; Danese, L.; Negrello, M.; Dunne, L.; Maddox, S.; Eales, S.; Auld, R.; Dariush, A.; Dye, S.; Baes, M.; Fritz, J.; Bonfield, D. G.; Buttiglione, S.; Cava, A.; Clements, D. L.; Cooray, A.

    2011-11-20

    Exploiting the Herschel Astrophysical Terahertz Large Area Survey Science Demonstration Phase survey data, we have determined the luminosity functions (LFs) at rest-frame wavelengths of 100 and 250 μm and at several redshifts z ≳ 1, for bright submillimeter galaxies with star formation rates (SFRs) ≳ 100 M⊙ yr⁻¹. We find that the evolution of the comoving LF is strong up to z ≈ 2.5, and slows down at higher redshifts. From the LFs and the information on halo masses inferred from clustering analysis, we derived an average relation between SFR and halo mass (and its scatter). We also infer that the timescale of the main episode of dust-enshrouded star formation in massive halos (M_H ≳ 3 × 10¹² M⊙) amounts to ≈7 × 10⁸ yr. Given the SFRs, which are in the range of 10²-10³ M⊙ yr⁻¹, this timescale implies final stellar masses of the order of 10¹¹-10¹² M⊙. The corresponding stellar mass function matches the observed mass function of passively evolving galaxies at z ≳ 1. The comparison of the statistics for submillimeter and UV-selected galaxies suggests that the dust-free, UV bright phase is ≳ 10² times shorter than the submillimeter bright phase, implying that the dust must form soon after the onset of star formation. Using a single reference spectral energy distribution (SED; the one of the z ≈ 2.3 galaxy SMM J2135-0102), our simple physical model is able to reproduce not only the LFs at different redshifts >1 but also the counts at wavelengths ranging from 250 μm to ≈1 mm. Owing to the steepness of the counts and their relatively broad frequency range, this result suggests that the dispersion of submillimeter SEDs of z > 1 galaxies around the reference one is rather small.

  12. Automatic meshing of curved three-dimensional domains: Curving finite elements and curvature-based mesh control

    SciTech Connect (OSTI)

    Shephard, M.S.; Dey, S.; Georges, M.K.

    1995-12-31

    Specific issues associated with the automatic generation of finite element meshes for curved geometric domains are considered. A review of the definition of when a triangulation is a valid mesh, a geometric triangulation, for curved geometric domains is given. Consideration is then given to the additional operations necessary to maintain the validity of a mesh when curved finite elements are employed. A procedure to control the mesh gradations based on the curvature of the geometric model faces is also given.

  13. SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy

    SciTech Connect (OSTI)

    Melchior, M; Salinas Aranda, F; Sciutto, S; Dodat, D; Larragueta, N

    2014-06-01

    Purpose: To automatically validate megavoltage beams modeled in XiO 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MATLAB integrated development environment. TPS-calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. Experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate dose distribution fit: gamma analysis and point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. Tolerance parameters chosen for gamma analysis are 3% and 3 mm for dose and distance, respectively. Absolute dose was measured independently at points proposed in Appendix E of TECDOC-1583 to validate software results. Results: TPS-calculated depth dose distributions agree with measured beam data under fixed precision values at all depths analyzed. Measured beam dose profiles match TPS-calculated doses with high accuracy in both open and wedged beams. Gamma analysis of depth and profile dose distributions shows gamma values < 1. Relative errors at points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at points proposed in Appendix E of TECDOC-1583 confirm software results. Conclusion: Automatic validation of megavoltage beams modeled for their use in the clinic was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use was reduced.
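
    The gamma analysis with 3%/3 mm criteria mentioned in this record can be illustrated in 1D: a measured point passes if some calculated point is simultaneously close in dose (3% of the global maximum) and in position (3 mm). This is a brute-force sketch with synthetic curves, not the clinical implementation of the abstract.

```python
# Brute-force 1D gamma analysis, 3%/3 mm, global dose normalization (sketch).
import numpy as np

def gamma_pass_rate(pos_m, dose_m, pos_c, dose_c, dd=0.03, dta=3.0):
    ref = np.max(dose_m)                       # global dose normalization
    passed = []
    for xm, dm in zip(pos_m, dose_m):
        g = np.sqrt(((pos_c - xm) / dta) ** 2 +
                    ((dose_c - dm) / (dd * ref)) ** 2)
        passed.append(np.min(g) <= 1.0)        # gamma <= 1 means pass
    return float(np.mean(passed))

x = np.linspace(0, 100, 201)                   # positions in mm, 0.5 mm grid
measured = np.exp(-(((x - 50) / 20) ** 2))     # synthetic profile
calculated = 1.01 * np.exp(-(((x - 50.5) / 20) ** 2))  # 1% dose, 0.5 mm shift

rate = gamma_pass_rate(x, measured, x, calculated)
```

    A 1%/0.5 mm discrepancy passes everywhere, while a 20% dose error fails near the peak, which is exactly the discrimination the test is designed for.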

  14. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    SciTech Connect (OSTI)

    Chen, Xinjian; Bagci, Ulas

    2011-08-15

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used for the multiobject recognition task, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that they proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C-1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprising images obtained from 20 patients (10 male and 10 female) of clinical abdominal CT scans, and 11 foot magnetic resonance imaging (MRI) scans. The tests cover segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error over all organs are about 8 mm, 10 deg. and 0.03, and over all foot bones are about 3.5709 mm, 0.35 deg. and 0.025, respectively.
The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and
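
    The delineation metrics quoted here, TPVF and FPVF, can be computed directly from binary masks. The toy 1D "volumes" below are illustrative, and note that conventions for the FPVF denominator vary (here the background of the reference region is used), so treat this as one plausible reading rather than the paper's exact definition.

```python
# True/false positive volume fractions from binary masks (toy 1D example).
import numpy as np

def tpvf_fpvf(seg, truth):
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    tpvf = (seg & truth).sum() / truth.sum()        # fraction of object captured
    fpvf = (seg & ~truth).sum() / (~truth).sum()    # false volume vs. background
    return 100 * tpvf, 100 * fpvf

truth = np.zeros(100, bool)
truth[40:60] = True                 # ground-truth object: 20 voxels
seg = np.zeros(100, bool)
seg[42:61] = True                   # slightly shifted delineation

tp, fp = tpvf_fpvf(seg, truth)      # 18/20 captured, 1/80 false
```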

  15. ELLIPTICAL WEIGHTED HOLICs FOR WEAK LENSING SHEAR MEASUREMENT. III. THE EFFECT OF RANDOM COUNT NOISE ON IMAGE MOMENTS IN WEAK LENSING ANALYSIS

    SciTech Connect (OSTI)

    Okura, Yuki; Futamase, Toshifumi E-mail: tof@astr.tohoku.ac.jp

    2013-07-01

    This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid-shift error; the first-order effects cancel on averaging, but the second-order effects do not. We derive the formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors for each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
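
    The image moments underlying this kind of analysis can be shown in unweighted form: ellipticity components from quadrupole moments, e1 = (Qxx - Qyy)/(Qxx + Qyy) and e2 = 2 Qxy/(Qxx + Qyy). The test image is a synthetic elongated Gaussian; E-HOLICs' elliptical weighting and the noise corrections derived in the paper are beyond this sketch.

```python
# Ellipticity from second brightness moments (unweighted sketch).
import numpy as np

def ellipticity(img):
    y, x = np.indices(img.shape, dtype=float)
    tot = img.sum()
    xc, yc = (img * x).sum() / tot, (img * y).sum() / tot   # centroid
    qxx = (img * (x - xc) ** 2).sum() / tot
    qyy = (img * (y - yc) ** 2).sum() / tot
    qxy = (img * (x - xc) * (y - yc)).sum() / tot
    return (qxx - qyy) / (qxx + qyy), 2 * qxy / (qxx + qyy)

y, x = np.indices((101, 101), dtype=float)
img = np.exp(-(((x - 50) / 8) ** 2 + ((y - 50) / 4) ** 2) / 2)  # elongated in x

e1, e2 = ellipticity(img)   # expect e1 near (64 - 16) / (64 + 16) = 0.6
```

    Adding pixel noise to `img` biases these moments, which is precisely the second-order count-noise effect the paper corrects for.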

  16. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    SciTech Connect (OSTI)

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

    Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity, accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model can incorporate the general correlation structure and accounts for the overdispersion in the data, which leads to a superior data fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian-vehicle crash data collected in New York City from 2002 to 2006, and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fit. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.
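
    The overdispersion that motivates the Poisson-lognormal model can be shown in miniature: counts that are Poisson-distributed around a rate with a lognormal random effect have variance well above the mean, unlike a plain Poisson model. The parameters below are illustrative, not estimates from the crash datasets.

```python
# Univariate Poisson-lognormal simulation showing overdispersion (sketch).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, 0.8, 200_000
lam = np.exp(mu + sigma * rng.standard_normal(n))   # lognormal rates
counts = rng.poisson(lam)                           # Poisson given the rate

mean, var = counts.mean(), counts.var()
# Theory: E[N] = exp(mu + sigma^2/2);
#         Var[N] = E[N] + E[N]^2 * (exp(sigma^2) - 1)  >  E[N]
```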

  17. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

    Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity, accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model can incorporate the general correlation structure and accounts for the overdispersion in the data, which leads to a superior data fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian-vehicle crash data collected in New York City from 2002 to 2006, and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fit. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.

  18. Low-Intrusion Techniques and Sensitive Information Management for Warhead Counting and Verification: FY2011 Annual Report

    SciTech Connect (OSTI)

    Jarman, Kenneth D.; Robinson, Sean M.; McDonald, Benjamin S.; Gilbert, Andrew J.; Misner, Alex C.; Pitts, W. Karl; White, Timothy A.; Seifert, Allen; Miller, Erin A.

    2011-09-01

    Future arms control treaties may push nuclear weapons limits to unprecedented low levels and may entail precise counting of warheads as well as distinguishing between strategic and tactical nuclear weapons. Such advances will require assessment of form and function to confidently verify the presence or absence of nuclear warheads and/or their components. Imaging with penetrating radiation can provide such an assessment and could thus play a unique role in inspection scenarios. Yet many imaging capabilities have been viewed as too intrusive from the perspective of revealing weapon design details, and the potential for the release of sensitive information poses challenges in verification settings. A widely held perception is that verification through radiography requires images of sufficient quality that an expert (e.g., a trained inspector or an image-matching algorithm) can verify the presence or absence of components of a device. The concept of information barriers (IBs) has been established to prevent access to relevant weapon-design information by inspectors (or algorithms), and has, to date, limited the usefulness of radiographic inspection. The challenge of this project is to demonstrate that radiographic information can be used behind an IB to improve the capabilities of treaty-verification weapons-inspection systems.

  19. Full counting statistics as a probe of quantum coherence in a side-coupled double quantum dot system

    SciTech Connect (OSTI)

    Xue, Hai-Bin

    2013-12-15

    We study theoretically the full counting statistics of electron transport through a side-coupled double quantum dot (QD) based on an efficient particle-number-resolved master equation. It is demonstrated that the high-order cumulants of the transport current are more sensitive to the quantum coherence than the average current, which can be used to probe the quantum coherence of the considered double QD system. In particular, quantum coherence plays a crucial role in determining whether the super-Poissonian noise occurs in the weak inter-dot hopping coupling regime depending on the corresponding QD-lead coupling, and the corresponding values of super-Poissonian noise can be relatively enhanced when considering the spins of conduction electrons. Moreover, this super-Poissonian noise bias range depends on the singly-occupied eigenstates of the system, which thus suggests a tunable super-Poissonian noise device. The occurrence mechanism of super-Poissonian noise can be understood in terms of the interplay of quantum coherence and effective competition between fast-and-slow transport channels. Highlights: The FCS can be used to probe the quantum coherence of the side-coupled double QD system. Probing quantum coherence using FCS may permit experimental tests in the near future. The current noise characteristics depend on the quantum coherence of this QD system. The super-Poissonian noise can be enhanced when considering conduction electron spin. The side-coupled double QD system suggests a tunable super-Poissonian noise device.
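
    The cumulants of the transferred-charge distribution referred to above can be illustrated with a toy super-Poissonian case: a 50/50 mixture of slow and fast Poisson channels. The first cumulant is the mean, the second is the variance, and a Fano factor C2/C1 > 1 signals super-Poissonian noise. The rates are illustrative, not derived from the master equation of the paper.

```python
# First two cumulants and Fano factor for a toy fast/slow channel mixture.
import math
import numpy as np

def poisson_pmf(lam, nmax):
    return np.array([math.exp(-lam) * lam ** k / math.factorial(k)
                     for k in range(nmax)])

def first_two_cumulants(p):
    n = np.arange(len(p))
    c1 = (p * n).sum()                 # mean
    c2 = (p * (n - c1) ** 2).sum()     # variance
    return c1, c2

# Fast-and-slow transport channels, weights 1/2 each (illustrative rates).
p = 0.5 * poisson_pmf(2.0, 60) + 0.5 * poisson_pmf(20.0, 60)
p /= p.sum()

c1, c2 = first_two_cumulants(p)
fano = c2 / c1                         # > 1: super-Poissonian
```

    For a single Poisson channel all cumulants equal the mean (Fano factor 1); the bimodal mixture pushes the variance far above the mean, mimicking the fast/slow channel competition the abstract describes.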

  20. [An improved, more reliable and more marketable version of the Automatic Metering System

    SciTech Connect (OSTI)

    Patas, J.E.

    1993-07-01

    Texas Research Institute Austin, Inc. (TRI/Austin) was tasked by Letco International to evaluate its Automatic Metering System (AMS), a proportional controller for heat tracing cable. The original objectives were focused primarily on the reliability of the AMS controller. However, from the time of the original TRI/Austin proposal, the AMS device evolved beyond the prototype level into an established market product with sufficient operational experience and data that product reliability evaluation was not a significant test objective. The goals of this effort have been to determine the relative energy usage of the AMS proportional control compared to existing thermostatic control in a realistic freeze protection installation (low temperature test), to perform an accelerated life test for self limiting heat tracing cables to determine the service life impact of AMS control vs. thermostat control, and to perform a reliability analysis of the AMS device according to the 1986 edition of MIL-HDBK-217E [1] specifications. TRI/Austin designed and constructed a test set-up for conducting the low temperature test and the accelerated life test. A conceptual diagram of the test hardware is shown in Figure 1. The control computer was programmed to monitor and collect data from both tests in parallel, using the relay box and control circuitry fabricated at TRI/Austin. Test data and control commands were transmitted to and from the computer via standard parallel and serial interfaces. The AMS controller and relay box switched the power to the test cables, the commercial freezer, and the ALT chamber.

  1. SU-E-I-24: Method for CT Automatic Exposure Control Verification

    SciTech Connect (OSTI)

    Gracia, M; Olasolo, J; Martin, M; Bragado, L; Gallardo, N; Miquelez, S; Maneru, F; Lozares, S; Pellejero, S; Rubio, A

    2015-06-15

    Purpose: Design of a phantom and a simple method for automatic exposure control (AEC) verification in CT. This verification is included in the Spanish computed tomography (CT) Quality Assurance Protocol. Methods: The phantom is made from the head and body phantoms used for CTDI measurement and PMMA plates (35×35 cm²) of 10 cm thickness. This provides three different thicknesses along the longitudinal axis, which permit evaluation of the longitudinal AEC performance. In addition, the asymmetry in the PMMA layers helps to assess angular and 3D AEC operation. The recent acquisition in our hospital (August 2014) of a Nomex electrometer (PTW), together with the 10 cm pencil ionization chamber, made it possible to record dose rate as a function of time. Measurements with this chamber fixed at 0° and 90° on the gantry were made on five multidetector CTs from the principal manufacturers. Results: Individual analysis of measurements shows dose rate variation as a function of phantom thickness. The comparative analysis shows that dose rate is kept constant in the head and neck phantom while the PMMA phantom exhibits an abrupt variation between both results, with greater results at 90° since the thickness of the phantom is 3.5 times larger than in the perpendicular direction. Conclusion: The proposed method is simple, quick and reproducible. The results obtained allow a qualitative evaluation of the AEC and are consistent with the expected behavior. A line of future development is to quantitatively study the intensity modulation and image quality parameters, and a possible comparative study between different manufacturers.

  2. SU-D-BRD-04: The Impact of Automatic Radiation Therapy Plan Checks in Treatment Planning

    SciTech Connect (OSTI)

    Gopan, O; Yang, F; Ford, E

    2015-06-15

    Purpose: The physics plan check verifies various aspects of a treatment plan after dosimetrists have finished creating the plan. Some errors that are caught by the physics check could be caught earlier in the departmental workflow. The purpose of this project was to evaluate a plan checking script that can be run within the treatment planning system (TPS) by the dosimetrists prior to plan approval and export to the record and verify system. Methods: A script was created in the Pinnacle TPS to automatically check 15 aspects of a plan for clinical practice conformity. The script outputs a list of checks which the plan has passed and a list of checks which the plan has failed so that appropriate adjustments can be made. For this study, the script was run on a total of 108 plans: IMRT (46/108), VMAT (35/108) and SBRT (27/108). Results: Of the plans checked by the script, 77/108 (71%) failed at least one of the fifteen checks. IMRT plans resulted in more failed checks (91%) than VMAT (51%) or SBRT (63%), due to the high failure rate of an IMRT-specific check, which checks that no IMRT segment < 5 MU. The dose grid size and couch removal checks caught errors in 10% and 14% of all plans, errors that ultimately may have resulted in harm to the patient. Conclusion: Approximately three-fourths of the plans examined contain errors that could be caught by dosimetrists running an automated script embedded in the TPS. The results of this study will improve the departmental workflow by cutting down on the number of plans that, due to these types of errors, necessitate re-planning and re-approval, increase dosimetrist and physician workload and, in urgent cases, inconvenience patients by causing treatment delays.
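
    The pre-approval script idea can be sketched as a list of rule functions run against a plan record. The plan fields and the two rules shown (minimum segment MU, dose grid size) are hypothetical stand-ins for the 15 clinical checks in the study, not its actual Pinnacle script.

```python
# Rule-based automated plan check (hypothetical fields and thresholds).

def check_min_segment_mu(plan):
    # Mirrors the "no IMRT segment < 5 MU" rule cited in the results.
    return all(mu >= 5 for mu in plan["segment_mu"]), "IMRT segment < 5 MU"

def check_dose_grid(plan):
    return plan["dose_grid_mm"] <= 3.0, "dose grid too coarse"

CHECKS = (check_min_segment_mu, check_dose_grid)

def run_checks(plan):
    """Return the failure messages for every check the plan does not pass."""
    return [msg for check in CHECKS for ok, msg in [check(plan)] if not ok]

plan = {"segment_mu": [12.0, 4.2, 8.0], "dose_grid_mm": 2.5}
failures = run_checks(plan)    # the 4.2 MU segment trips the first rule
```

    Keeping each rule as a separate function makes adding the remaining checks a matter of appending to `CHECKS`, which matches the workflow improvement the study describes.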

  3. Event counting alpha detector

    DOE Patents [OSTI]

    Bolton, R.D.; MacArthur, D.W.

    1996-08-27

    An electrostatic detector is disclosed for atmospheric radon or other weak sources of alpha radiation. In one embodiment, nested enclosures are insulated from one another, open at the top, and have a high voltage pin inside and insulated from the inside enclosure. An electric field is produced between the pin and the inside enclosure. Air ions produced by collision with alpha particles inside the decay volume defined by the inside enclosure are attracted to the pin and the inner enclosure. With low alpha concentrations, individual alpha events can be measured to indicate the presence of radon or other alpha radiation. In another embodiment, an electrical field is produced between parallel plates which are insulated from a single decay cavity enclosure. 6 figs.

  4. Event counting alpha detector

    DOE Patents [OSTI]

    Bolton, Richard D.; MacArthur, Duncan W.

    1996-01-01

    An electrostatic detector for atmospheric radon or other weak sources of alpha radiation. In one embodiment, nested enclosures are insulated from one another, open at the top, and have a high voltage pin inside and insulated from the inside enclosure. An electric field is produced between the pin and the inside enclosure. Air ions produced by collision with alpha particles inside the decay volume defined by the inside enclosure are attracted to the pin and the inner enclosure. With low alpha concentrations, individual alpha events can be measured to indicate the presence of radon or other alpha radiation. In another embodiment, an electrical field is produced between parallel plates which are insulated from a single decay cavity enclosure.

  5. Counting paths in digraphs

    SciTech Connect (OSTI)

    Sullivan, Blair D; Seymour, Dr. Paul Douglas

    2010-01-01

    Say a digraph is k-free if it has no directed cycles of length at most k, for k ∈ Z⁺. Thomasse conjectured that the number of induced 3-vertex directed paths in a simple 2-free digraph on n vertices is at most (n-1)n(n+1)/15. We present an unpublished result of Bondy proving there are at most 2n³/25 such paths, and prove that for the class of circular interval digraphs, an upper bound of n³/16 holds. We also study the problem of bounding the number of (non-induced) 4-vertex paths in 3-free digraphs. We show an upper bound of 4n⁴/75 using Bondy's result for Thomasse's conjecture.
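
    The counted quantity can be enumerated directly for small digraphs: an induced 3-vertex directed path u -> v -> w is one where the only arcs among {u, v, w} are (u, v) and (v, w). A brute-force check (illustration only, cubic in the number of vertices):

```python
# Count induced 3-vertex directed paths by brute force (small digraphs only).
from itertools import permutations

def count_induced_p3(vertices, arcs):
    a = set(arcs)
    count = 0
    for u, v, w in permutations(vertices, 3):
        if (u, v) in a and (v, w) in a:
            # Induced: no other arcs among {u, v, w} in either direction.
            if not ({(v, u), (w, v), (u, w), (w, u)} & a):
                count += 1
    return count

# Directed 5-cycle: each pair of consecutive arcs is an induced path, so 5 in
# total, within Thomasse's conjectured bound (n-1)n(n+1)/15 = 8 for n = 5.
cycle = [(i, (i + 1) % 5) for i in range(5)]
n_paths = count_induced_p3(range(5), cycle)
```

    By contrast, the transitive tournament on 3 vertices has a directed path 0 -> 1 -> 2 but the shortcut arc (0, 2) makes it non-induced, so it contributes nothing.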

  6. Job Counting Guidelines

    Broader source: Energy.gov [DOE]

    The following updated definitions and guidelines are intended to provide EM Recovery Act sites with information to collect and report timely and accurate full‐time equivalent and cumulative head...

  7. Neutron Coincidence Counting Studies

    SciTech Connect (OSTI)

    Rogers, Jeremy L.; Ely, James H.; Kouzes, Richard T.; Lintereur, Azaree T.; Siciliano, Edward R.

    2012-08-31

    The efficiency comparison for measured and simulated responses of a ¹⁰B-lined proportional counter and a ³He proportional counter in a close, symmetrical geometry is presented. The measurement geometry was modeled in MCNPX to validate the methods used for simulating the response of both the ³He and ¹⁰B-lined tubes. The MCNPX models agree within 1% with the ³He tube measurements and within 3% for the ¹⁰B-lined tubes when a 0.75-µm boron-metal lining is used.

  8. Automatic system for regulating the frequency and power of the 500 MW coal-dust power generating units at the Reftinskaya GRES

    SciTech Connect (OSTI)

    Bilenko, V. A.; Gal'perina, A. I.; Mikushevich, E. E.; Nikol'skii, D. Yu.; Zhugrin, A. G.; Bebenin, P. A.; Syrchin, M. V.

    2009-03-15

    The monitoring and control systems at the 500 MW coal-dust power generating units No. 7, 8, and 9 at the Reftinskaya GRES have been modernized using information-regulator systems. Layouts for instrumental construction of these systems and expanded algorithmic schemes for the automatic frequency and power control system and for the boiler supply and fuelling are discussed. Results from tests and normal operation of the automatic frequency and power control system are presented.

  9. Detection of illicit HEU production in gaseous centrifuge enrichment plants using neutron counting techniques on product cylinders

    SciTech Connect (OSTI)

    Freeman, Corey R; Geist, William H

    2010-01-01

    Innovative and novel safeguards approaches are needed for nuclear energy to meet global energy needs without the threat of nuclear weapons proliferation. Part of these efforts will include creating verification techniques that can monitor uranium enrichment facilities for illicit production of highly-enriched uranium (HEU). Passive nondestructive assay (NDA) techniques will be critical in preventing illicit HEU production because NDA offers the possibility of continuous and unattended monitoring capabilities with limited impact on facility operations. Gaseous centrifuge enrichment plants (GCEPs) are commonly used to produce low-enriched uranium (LEU) for reactor fuel. In a GCEP, gaseous UF₆ spins at high velocities in centrifuges to separate the molecules containing ²³⁸U from those containing the lighter ²³⁵U. Unfortunately, the process for creating LEU is inherently the same as for HEU, creating a proliferation concern. Ensuring that GCEPs are producing declared enrichments poses many difficult challenges. In a GCEP, large cascade halls operating thousands of centrifuges work together to enrich the uranium, which makes effective monitoring of the cascade hall economically prohibitive and invasive to plant operations. However, the enriched uranium exiting the cascade hall fills product cylinders where the UF₆ gas sublimes and condenses for easier storage and transportation. These product cylinders hold large quantities of enriched uranium, offering a strong signal for NDA measurement. Neutrons have a large penetrability through materials, making their use advantageous compared to gamma techniques where the signal is easily attenuated. One proposed technique for detecting HEU production in a GCEP is using neutron coincidence counting at the product cylinder take-off stations. This paper discusses findings from Monte Carlo N-Particle eXtended (MCNPX) code simulations that examine the feasibility of such a detector.
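
    The coincidence idea can be shown in miniature: fission emits several neutrons nearly simultaneously, so correlated detections cluster within a short coincidence gate, while background arrivals do not. Real systems use shift-register electronics; this timestamp sketch, with illustrative rates and gate width, only shows why pair counts separate the two cases.

```python
# Toy coincidence counting on detection timestamps (illustrative parameters).
import numpy as np

def count_doubles(timestamps, gate=1e-4):
    """Count pairs of detections separated by no more than the gate (seconds)."""
    ts = np.sort(np.asarray(timestamps))
    pairs = 0
    for i, t in enumerate(ts):
        j = i + 1
        while j < len(ts) and ts[j] - t <= gate:
            pairs += 1
            j += 1
    return pairs

rng = np.random.default_rng(1)
background = rng.uniform(0, 10.0, 200)              # uncorrelated singles, 10 s
bursts = np.repeat(rng.uniform(0, 10.0, 50), 2)     # 50 correlated pairs
bursts = bursts + rng.normal(0, 1e-5, bursts.size)  # pair members ~10 us apart

d_signal = count_doubles(np.concatenate([background, bursts]))
d_background = count_doubles(background)
```

    With a 100 µs gate, the uncorrelated stream yields almost no doubles while the correlated pairs nearly all register, so the doubles count carries the fission signature.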

  10. 2014-06-09 Issuance: Energy Conservation Standards for Automatic Commercial Ice Makers; Notice of Public Meeting

    Broader source: Energy.gov [DOE]

    This document is a pre-publication Federal Register Notice of Public Meeting regarding Energy Conservation Standards for Automatic Commercial Ice Maker, as issued by the Deputy Assistant Secretary for Energy Efficiency on June 9, 2014. Though it is not intended or expected, should any discrepancy occur between the document posted here and the document published in the Federal Register, the Federal Register publication controls. This document is being made available through the Internet solely as a means to facilitate the public's access to this document.

  11. Ideal-observer detectability in photon-counting differential phase-contrast imaging using a linear-systems approach

    SciTech Connect (OSTI)

    Fredenberg, Erik; Danielsson, Mats; Stayman, J. Webster; Siewerdsen, Jeffrey H.; Aslund, Magnus

    2012-09-15

    Purpose: To provide a cascaded-systems framework based on the noise-power spectrum (NPS), modulation transfer function (MTF), and noise-equivalent number of quanta (NEQ) for quantitative evaluation of differential phase-contrast imaging (Talbot interferometry) in relation to conventional absorption contrast under equal-dose, equal-geometry, and, to some extent, equal-photon-economy constraints. The focus is a geometry for photon-counting mammography. Methods: Phase-contrast imaging is a promising technology that may emerge as an alternative or adjunct to conventional absorption contrast. In particular, phase contrast may increase the signal-difference-to-noise ratio compared to absorption contrast because the difference in phase shift between soft-tissue structures is often substantially larger than the absorption difference. We have developed a comprehensive cascaded-systems framework to investigate Talbot interferometry, which is a technique for differential phase-contrast imaging. Analytical expressions for the MTF and NPS were derived to calculate the NEQ and a task-specific ideal-observer detectability index under assumptions of linearity and shift invariance. Talbot interferometry was compared to absorption contrast at equal dose, and using either a plane wave or a spherical wave in a conceivable mammography geometry. The impact of source size and spectrum bandwidth was included in the framework, and the trade-off with photon economy was investigated in some detail. Wave-propagation simulations were used to verify the analytical expressions and to generate example images. Results: Talbot interferometry inherently detects the differential of the phase, which led to a maximum in NEQ at high spatial frequencies, whereas the absorption-contrast NEQ decreased monotonically with frequency. Further, phase contrast detects differences in density rather than atomic number, and the optimal imaging energy was found to be a factor of 1.7 higher than for absorption

  12. Nonlinear automatic landing control of unmanned aerial vehicles on moving platforms via a 3D laser radar

    SciTech Connect (OSTI)

    Hervas, Jaime Rubio; Tang, Hui; Reyhanoglu, Mahmut

    2014-12-10

    This paper presents a motion tracking and control system for automatically landing Unmanned Aerial Vehicles (UAVs) on an oscillating platform using Laser Radar (LADAR) observations. The system itself is assumed to be mounted on a ship deck. A full nonlinear mathematical model is first introduced for the UAV. The ship motion is characterized by a Fourier transform based method which includes a realistic characterization of the sea waves. LADAR observation models are introduced and an algorithm to process those observations for yielding the relative state between the vessel and the UAV is presented, from which the UAV's state relative to an inertial frame can be obtained and used for feedback purposes. A sliding mode control algorithm is derived for tracking a landing trajectory defined by a set of desired waypoints. An extended Kalman filter (EKF) is proposed to account for process and observation noises in the design of a state estimator. The effectiveness of the control algorithm is illustrated through a simulation example.
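    The state estimator described above can be illustrated with a minimal sketch. The paper uses an extended Kalman filter (EKF), which relinearizes the dynamics and LADAR observation models about the current estimate at each step, but the predict/update structure is the same as in the linear case shown here; all matrices and the scalar test case are our illustrative assumptions, not the paper's models.

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle of a (linear) Kalman filter.

        x, P : prior state estimate and covariance
        z    : new measurement
        F, H : state-transition and observation matrices
        Q, R : process- and observation-noise covariances
        """
        # Predict: propagate the state and covariance through the dynamics.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend the prediction with the measurement via the Kalman gain.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P
    ```

    For a scalar random-walk model (F = H = 1, Q = 0, R = 1), a prior of 0 with unit variance and a measurement of 2 yield the expected posterior of 1 with variance 0.5.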

  13. Increasing the reliability of the shutdown of 500 - 750-kV overhead lines equipped with shunt reactors in an unsuccessful three-phase automatic repeated closure cycle

    SciTech Connect (OSTI)

    Kuz'micheva, K. I.; Merzlyakov, A. S.; Fokin, G. G.

    2013-05-15

    The reasons for circuit-breaker failures during repeated disconnection of 500 - 750 kV overhead lines with shunt reactors in a cycle of unsuccessful three-phase automatic reconnection (TARC) are analyzed. Recommendations are made for increasing the operating reliability of power transmission lines with shunt reactors when there is unsuccessful reconnection.

  14. Investigating the limits of PET/CT imaging at very low true count rates and high random fractions in ion-beam therapy monitoring

    SciTech Connect (OSTI)

    Kurz, Christopher; Bauer, Julia; Conti, Maurizio; Guérin, Laura; Eriksson, Lars; Parodi, Katia

    2015-07-15

    Purpose: External beam radiotherapy with protons and heavier ions enables a tighter conformation of the applied dose to arbitrarily shaped tumor volumes with respect to photons, but is more sensitive to uncertainties in the radiotherapeutic treatment chain. Consequently, an independent verification of the applied treatment is highly desirable. For this purpose, the irradiation-induced β{sup +}-emitter distribution within the patient is detected shortly after irradiation by a commercial full-ring positron emission tomography/x-ray computed tomography (PET/CT) scanner installed next to the treatment rooms at the Heidelberg Ion-Beam Therapy Center (HIT). A major challenge to this approach is posed by the small number of detected coincidences. This contribution aims at characterizing the performance of the used PET/CT device and identifying the best-performing reconstruction algorithm under the particular statistical conditions of PET-based treatment monitoring. Moreover, this study addresses the impact of radiation background from the intrinsically radioactive lutetium-oxyorthosilicate (LSO)-based detectors at low counts. Methods: The authors have acquired 30 subsequent PET scans of a cylindrical phantom emulating a patientlike activity pattern and spanning the entire patient counting regime in terms of true coincidences and random fractions (RFs). Accuracy and precision of activity quantification, image noise, and geometrical fidelity of the scanner have been investigated for various reconstruction algorithms and settings in order to identify a practical, well-suited reconstruction scheme for PET-based treatment verification. Truncated listmode data have been utilized for separating the effects of small true count numbers and high RFs on the reconstructed images. A corresponding simulation study enabled extending the results to an even wider range of counting statistics and to additionally investigate the impact of scatter coincidences. Eventually, the recommended

  15. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma

    SciTech Connect (OSTI)

    Ciller, Carlos; De Zanet, Sandro I.; Rüegsegger, Michael B.; Pica, Alessia; Sznitman, Raphael; Thiran, Jean-Philippe; Maeder, Philippe; Munier, Francis L.; Kowal, Jens H.; and others

    2015-07-15

    Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.
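    The Dice similarity coefficient used for validation above has a standard definition, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch for binary masks (the function name is ours, not from the paper):

    ```python
    import numpy as np

    def dice_coefficient(a, b):
        """Dice similarity coefficient between two binary segmentation masks."""
        a = np.asarray(a, dtype=bool)
        b = np.asarray(b, dtype=bool)
        intersection = np.logical_and(a, b).sum()
        denom = a.sum() + b.sum()
        # Two empty masks agree perfectly by convention.
        return 2.0 * intersection / denom if denom else 1.0
    ```

    For example, masks [1,1,0,0] and [1,0,0,0] overlap in one voxel, giving DSC = 2·1/(2+1) = 2/3.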

  16. A Pixel Readout Chip in 40 nm CMOS Process for High Count Rate Imaging Systems with Minimization of Charge Sharing Effects

    SciTech Connect (OSTI)

    Maj, Piotr; Grybos, P.; Szczgiel, R.; Kmon, P.; Drozd, A.; Deptuch, G.

    2013-11-07

    We present a prototype chip in 40 nm CMOS technology for the readout of hybrid pixel detectors. The prototype chip has a matrix of 18x24 pixels with a pixel pitch of 100 µm. It can operate both in single photon counting (SPC) mode and in C8P1 mode. In SPC mode the measured ENC is 84 e{sup -} rms (for a peaking time of 48 ns), while the effective offset spread is below 2 mV rms. In the C8P1 mode the chip reconstructs the full charge deposited in the detector, even in the case of charge sharing, and it identifies the pixel with the largest charge deposition. The chip architecture and preliminary measurements are reported.

  17. Determination of total Pu content in a Spent Fuel Assembly by Measuring Passive Neutron Count rate and Multiplication with the Differential Die-Away Instrument

    SciTech Connect (OSTI)

    Henzl, Vladimir; Croft, Stephen; Swinhoe, Martyn T.; Tobin, Stephen J.

    2012-07-18

    A key objective of the Next Generation Safeguards Initiative (NGSI) is to evaluate and develop non-destructive assay (NDA) techniques to determine the elemental plutonium content in a commercial-grade nuclear spent fuel assembly (SFA) [1]. Within this framework, we investigate by simulation a novel analytical approach based on combined information from a passive measurement of the total neutron count rate of an SFA and its multiplication, determined by active interrogation using an instrument based on the Differential Die-Away (DDA) technique. We use detailed MCNPX simulations across an extensive set of SFA characteristics to establish the approach and demonstrate its robustness. It is predicted that the Pu content can be determined by the proposed method to within a few percent.

  18. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY

    SciTech Connect (OSTI)

    Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E

    2015-01-01

    Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be performed automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).

  19. Compression Algorithm Analysis of In-Situ (S)TEM Video: Towards Automatic Event Detection and Characterization

    SciTech Connect (OSTI)

    Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.; Browning, Nigel D.

    2015-09-23

    Precise analysis of both (S)TEM images and video is a time- and labor-intensive process. As an example, determining when crystal growth and shrinkage occur during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time-consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images, which it compresses into an MPEG-2 video using the open source “avconv” utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video, including the frame quality, intra-texture and predicted-texture bits, and forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on the statistic(s) chosen for each data type.
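    A hedged sketch of the kind of statistic-based event flagging the abstract describes: the real tool parses per-frame statistics out of the avconv first-pass log, while here we assume one such statistic has already been extracted into a list, and the simple global-threshold rule is our illustrative choice, not the authors' algorithm.

    ```python
    import statistics

    def detect_events(frame_stats, k=3.0):
        """Return indices of frames whose statistic deviates more than k
        standard deviations from the mean over all frames."""
        mean = statistics.fmean(frame_stats)
        sd = statistics.pstdev(frame_stats)
        if sd == 0:
            return []  # perfectly uniform signal: nothing to flag
        return [i for i, v in enumerate(frame_stats) if abs(v - mean) > k * sd]
    ```

    A single outlier frame in an otherwise flat statistic (e.g., a spike in intra-texture bits when a dendrite appears) is flagged by index.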

  20. Convective Heat Transfer Coefficients of Automatic Transmission Fluid Jets with Implications for Electric Machine Thermal Management: Preprint

    SciTech Connect (OSTI)

    Bennion, Kevin; Moreno, Gilberto

    2015-09-29

    Thermal management for electric machines (motors/generators) is important as the automotive industry continues to transition to more electrically dominant vehicle propulsion systems. Cooling of the electric machine(s) in some electric vehicle traction drive applications is accomplished by impinging automatic transmission fluid (ATF) jets onto the machine's copper windings. In this study, we provide the results of experiments characterizing the thermal performance of ATF jets on surfaces representative of windings, using Ford's Mercon LV ATF. Experiments were carried out at various ATF temperatures and jet velocities to quantify the influence of these parameters on heat transfer coefficients. Fluid temperatures were varied from 50 degrees C to 90 degrees C to encompass potential operating temperatures within an automotive transaxle environment. The jet nozzle velocities were varied from 0.5 to 10 m/s. The experimental ATF heat transfer coefficient results provided in this report are a useful resource for understanding factors that influence the performance of ATF-based cooling systems for electric machines.
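    The heat transfer coefficients reported above are conventionally defined through Newton's law of cooling, h = q'' / (T_s − T_f), with q'' the surface heat flux, T_s the heated surface temperature, and T_f the jet fluid temperature. A trivial sketch (function and variable names are ours, and the numbers below are illustrative, not values from the report):

    ```python
    def heat_transfer_coefficient(q_flux, t_surface, t_fluid):
        """Convective heat transfer coefficient in W/(m^2*K).

        q_flux    : surface heat flux, W/m^2
        t_surface : heated surface temperature, deg C (or K)
        t_fluid   : jet fluid temperature, same units as t_surface
        """
        return q_flux / (t_surface - t_fluid)
    ```

    For example, a 1000 W/m^2 flux with a 40 K surface-to-fluid temperature difference corresponds to h = 25 W/(m^2·K).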

  1. SU-D-BRD-07: Automatic Patient Data Audit and Plan Quality Check to Support ARIA and Eclipse

    SciTech Connect (OSTI)

    Li, X; Li, H; Wu, Y; Mutic, S; Yang, D

    2014-06-01

    Purpose: To ensure patient safety and treatment quality in RT departments that use Varian ARIA and Eclipse, we developed a computer software system and interface functions that allow previously developed electronic chart checking (EcCk) methodologies to support these Varian systems. Methods: ARIA and Eclipse store most patient information in an MSSQL database. We studied the contents of the hundreds of database tables and identified the data elements used for patient treatment management and treatment planning. Interface functions were developed in both C# and MATLAB to support data access from ARIA and Eclipse servers using SQL queries. These functions and additional data processing functions allowed the existing rules and logic from EcCk to support ARIA and Eclipse. Dose and structure information are important for plan quality checks; however, they are not stored in the MSSQL database but as files in Varian's private formats, and cannot be processed by external programs. We have therefore implemented a service program, which uses the DB Daemon and File Daemon services on the ARIA server to automatically and seamlessly retrieve dose and structure data as DICOM files. This service was designed to 1) continuously monitor the data access requests from EcCk programs, 2) translate the requests for ARIA daemon services to obtain dose and structure DICOM files, and 3) monitor the process and return the obtained DICOM files back to EcCk programs for plan quality check purposes. Results: EcCk, which was previously designed to support only MOSAIQ TMS and Pinnacle TPS, can now support Varian ARIA and Eclipse. The new EcCk software has been tested and worked well for physics new-start plan checks and for IMRT plan integrity and plan quality checks. Conclusion: Methods and computer programs have been implemented to allow EcCk to support Varian ARIA and Eclipse systems. This project was supported by a research grant from Varian Medical Systems.

  2. Automatic box loader

    DOE Patents [OSTI]

    Eldridge, Harry H.; Jones, Robert A.; Lindner, Gordon M.; Hight, Paul H.

    1976-01-01

    This invention relates to a system for repetitively forming an assembly consisting of a single layer of tubes and a row of ferromagnetic armatures underlying the same, electromagnetically conveying the resulting assembly to a position overlying a storage box, and depositing the assembly in the box. The system includes means for simultaneously depositing a row of the armatures on the inclined surface of a tube retainer. Tubes then are rolled down the surface to form a single tube layer bridging the armatures. A magnet assembly carrying electromagnets respectively aligned with the armatures is advanced close to the tube layer, and in the course of this advance is angularly displaced to bring the pole pieces of the electromagnets into parallelism with the tube layer. The magnets then are energized to pick up the assembly. The loaded magnet assembly is retracted to a position overlying the box, and during this retraction is again displaced to bring the pole pieces of the electromagnets into a horizontal plane. Means are provided for inserting the loaded electromagnets in the box and then de-energizing the electromagnets to deposit the assembly therein. The system accomplishes the boxing of fragile tubes at relatively high rates. Because the tubes are boxed as separated uniform layers, subsequent unloading operations are facilitated.

  3. Automatic temperature adjustment apparatus

    DOE Patents [OSTI]

    Chaplin, James E.

    1985-01-01

    An apparatus for increasing the efficiency of a conventional central space heating system is disclosed. The temperature of a fluid heating medium is adjusted based on a measurement of the external temperature, and a system parameter. The system parameter is periodically modified based on a closed loop process that monitors the operation of the heating system. This closed loop process provides a heating medium temperature value that is very near the optimum for energy efficiency.

  4. Automatic voltage imbalance detector

    DOE Patents [OSTI]

    Bobbett, Ronald E.; McCormick, J. Byron; Kerwin, William J.

    1984-01-01

    A device for indicating and preventing damage to voltage cells such as galvanic cells and fuel cells connected in series by detecting sequential voltages and comparing these voltages to adjacent voltage cells. The device is implemented by using operational amplifiers and switching circuitry is provided by transistors. The device can be utilized in battery powered electric vehicles to prevent galvanic cell damage and also in series connected fuel cells to prevent fuel cell damage.

  5. Use of an Individual Plant Examination (IPE) to enhance outage management. Phase 1, Interim report

    SciTech Connect (OSTI)

    Putney, B.; Averett, M.; Riley, J.

    1992-10-01

    A comparative emissions study was conducted on combustion products of various solid domestic cooking fuels; the objective was to compare relative levels of organic and inorganic toxic emissions from traditional Pakistani fuels (wood, wood charcoal, and dried animal dung) with manufactured low-rank coal briquettes (Lakhra and Sor-Range coals) under conditions simulating domestic cooking. A small combustion shed (12 m{sup 3} internal volume, air exchange rate 14 h{sup {minus}1}) was used to simulate south Asian cooking rooms. 200-g charges of the various fuels were ignited in an Angethi stove located inside the shed, then combusted to completion; effluents from this combustion were monitored as a function of time. Measurements were made of respirable particulates, volatile and semi-volatile organics, CO, SO{sub 2}, and NO{sub x}. Overall it appears that emissions from coal briquettes containing combustion amendments (slaked lime, clay, and potassium nitrate oxidizer) are no greater than emissions from traditional fuels, and in some cases are significantly lower; generally, emissions are highest for all fuels in the early stages of combustion.

  6. Use of an Individual Plant Examination (IPE) to enhance outage management

    SciTech Connect (OSTI)

    Putney, B.; Averett, M.; Riley, J. )

    1992-10-01

    A comparative emissions study was conducted on combustion products of various solid domestic cooking fuels; the objective was to compare relative levels of organic and inorganic toxic emissions from traditional Pakistani fuels (wood, wood charcoal, and dried animal dung) with manufactured low-rank coal briquettes (Lakhra and Sor-Range coals) under conditions simulating domestic cooking. A small combustion shed (12 m{sup 3} internal volume, air exchange rate 14 h{sup {minus}1}) was used to simulate south Asian cooking rooms. 200-g charges of the various fuels were ignited in an Angethi stove located inside the shed, then combusted to completion; effluents from this combustion were monitored as a function of time. Measurements were made of respirable particulates, volatile and semi-volatile organics, CO, SO{sub 2}, and NO{sub x}. Overall it appears that emissions from coal briquettes containing combustion amendments (slaked lime, clay, and potassium nitrate oxidizer) are no greater than emissions from traditional fuels, and in some cases are significantly lower; generally, emissions are highest for all fuels in the early stages of combustion.

  7. Status of U.S. Nuclear Outages - U.S. Energy Information Administratio...

    Gasoline and Diesel Fuel Update (EIA)


  8. Direct Deposition of Microcolumnar Scintillator on CMOS SSPM Array: Toward a Photon Counting Detector for X-Ray/Gamma Ray Imaging

    SciTech Connect (OSTI)

    Prekas, G.; Breen, M.; Sabet, H.; Bhandari, H.; Derderian, G.; Robertson, F. Jr; Stapels, C. J.; Christian, J.; Cool, S.; Nagarkar, V. V.

    2011-12-13

    We are developing a modular, low-cost, photon-counting detector based on a scintillator coupled to a solid-state photodetector. A working prototype was successfully developed by depositing CsI:Tl directly onto a CMOS SSPM array designed by RMD and custom-fabricated by a commercial foundry. The device comprised a 6x6 array of 1.5x1.5 mm{sup 2} macro-pixels, each containing a 36x36 array of resistively coupled micro-pixels, that was subjected to vapor deposition of columnar CsI:Tl. Direct deposition eliminates the gap between the scintillator and SSPM and creates a better optical bond than does index-matching grease. This paper compares the performance of SSPMs with directly deposited CsI:Tl, in terms of signal-to-noise ratio and light spread, against devices using monolithic single crystals or pixelated single crystals coupled to the SSPM. Due to the reduction in light scattering and optical losses in the interface, the directly deposited CsI:Tl demonstrated significantly better position sensitivity, with at least a factor of 2 increase in SNR compared to a single crystal. These data indicate that a photodetector with substantially smaller macro-pixel dimensions than used here could be used to implement a low-energy X-ray/gamma-ray imaging and spectroscopy detector, particularly for applications where high resolution is of prime importance.

  9. Channel-capacity gain in entanglement-assisted communication protocols based exclusively on linear optics, single-photon inputs, and coincidence photon counting

    SciTech Connect (OSTI)

    Lougovski, P.; Uskov, D. B.

    2015-08-04

    Entanglement can effectively increase communication channel capacity as evidenced by dense coding that predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N=6,8.
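    The 0.585-bit bound cited above can be reproduced with a short counting argument (a standard result, restated here rather than derived in the abstract): a linear-optics Bell analyzer identifies only two of the four Bell states unambiguously, with the remaining two distinguishable only as a pair, leaving three usable message classes. The resulting capacity and gain over the 1 bit per photon of an entanglement-free protocol are

    ```latex
    C_{\mathrm{lin}} = \log_2 3 \approx 1.585\ \text{bits},
    \qquad
    \Delta C = \log_2 3 - 1 \approx 0.585\ \text{bits}.
    ```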

  10. Channel-capacity gain in entanglement-assisted communication protocols based exclusively on linear optics, single-photon inputs, and coincidence photon counting

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lougovski, P.; Uskov, D. B.

    2015-08-04

    Entanglement can effectively increase communication channel capacity as evidenced by dense coding that predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N=6,8.

  11. ECUADOR: counting down the barrels

    SciTech Connect (OSTI)

    Not Available

    1983-02-09

    Within the world oil market, OPEC faces a reduced role as supplier and production/price dilemmas. One of its members, Ecuador, faces rapid drawdown of its reserves and ultimate loss of membership in the cartel. But Ecuador is tackling the problem by a variety of means and is still defending OPEC prices, as its OPEC Governor tells Energy Detente. The complete interview with Cesar Guerra Navarrete, the OPEC Governor is presented. The Energy Detente fuel price/tax series and the principal industrial fuel prices as of February 1983 are included for countries of the Eastern Hemisphere.

  12. Low Background Counting at LBNL

    Office of Scientific and Technical Information (OSTI)

    ... doi:10.1016/j.phpro.2014.12.101 A.R. Smith et al., Physics Procedia 61 (2015) 787 ... began in the summer of 1975 when A.R. Smith established contacts there with the ...

  13. Opcode counting for performance measurement

    DOE Patents [OSTI]

    Gara, Alan; Satterfield, David L.; Walkup, Robert E.

    2015-08-11

    Methods, systems and computer program products are disclosed for measuring a performance of a program running on a processing unit of a processing system. In one embodiment, the method comprises informing a logic unit of each instruction in the program that is executed by the processing unit, assigning a weight to each instruction, assigning the instructions to a plurality of groups, and analyzing the plurality of groups to measure one or more metrics. In one embodiment, each instruction includes an operating code portion, and the assigning includes assigning the instructions to the groups based on the operating code portions of the instructions. In an embodiment, each type of instruction is assigned to a respective one of the plurality of groups. These groups may be combined into a plurality of sets of the groups.
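    The grouping-and-weighting scheme the patent describes can be sketched in a few lines. The group names and weights below are illustrative assumptions (the patent keys groups on the operating-code portion of each instruction but does not prescribe particular weights):

    ```python
    from collections import defaultdict

    # Hypothetical per-opcode-group weights; a real profiler would tune these.
    WEIGHTS = {"fp": 2.0, "load": 1.5, "store": 1.5, "branch": 1.0, "int": 1.0}

    def profile(executed_opcodes):
        """Assign each executed instruction to a group keyed by its opcode
        portion and accumulate a weighted count per group."""
        groups = defaultdict(float)
        for opcode in executed_opcodes:
            groups[opcode] += WEIGHTS.get(opcode, 1.0)  # default weight 1.0
        return dict(groups)
    ```

    Metrics (e.g., an estimated cycle count) can then be computed by combining the per-group totals, mirroring the patent's combination of groups into sets.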

  14. Opcode counting for performance measurement

    DOE Patents [OSTI]

    Gara, Alan; Satterfield, David L; Walkup, Robert E

    2013-10-29

    Methods, systems and computer program products are disclosed for measuring a performance of a program running on a processing unit of a processing system. In one embodiment, the method comprises informing a logic unit of each instruction in the program that is executed by the processing unit, assigning a weight to each instruction, assigning the instructions to a plurality of groups, and analyzing the plurality of groups to measure one or more metrics. In one embodiment, each instruction includes an operating code portion, and the assigning includes assigning the instructions to the groups based on the operating code portions of the instructions. In an embodiment, each type of instruction is assigned to a respective one of the plurality of groups. These groups may be combined into a plurality of sets of the groups.

  15. TH-C-18A-01: Is Automatic Tube Current Modulation Still Necessary with Statistical Iterative Reconstruction?

    SciTech Connect (OSTI)

    Li, K; Zhao, W; Gomez-Cardona, D; Chen, G

    2014-06-15

    Purpose: Automatic tube current modulation (TCM) has been widely used in modern multi-detector CT to reduce noise spatial nonuniformity and streaks and to improve dose efficiency. With the advent of statistical iterative reconstruction (SIR), it is expected that the importance of TCM may diminish, since SIR incorporates statistical weighting factors to reduce the negative influence of photon-starved rays. The purpose of this work is to address the following questions: Does SIR offer the same benefits as TCM? If so, are there still any clinical benefits to using TCM? Methods: An anthropomorphic CIRS chest phantom was scanned using a state-of-the-art clinical CT system equipped with an SIR engine (Veo, GE Healthcare). The phantom was first scanned with TCM using a routine protocol and a low-dose (LD) protocol. It was then scanned without TCM using the same protocols. For each acquisition, both FBP and Veo reconstructions were performed. All scans were repeated 50 times to generate an image ensemble from which noise spatial nonuniformity (NSN) and streak artifact levels were quantified. Monte Carlo experiments were performed to estimate skin dose. Results: For FBP, noise streaks were reduced by 4% using TCM for both routine and LD scans. NSN values were actually slightly higher with TCM (0.25) than without TCM (0.24) for both routine and LD scans. In contrast, for Veo, noise streaks became negligible (<1%) with or without TCM for both routine and LD scans, and the NSN was reduced to 0.10 (low dose) or 0.08 (routine). The overall skin dose was 2% lower at the shoulders and more uniformly distributed across the skin without TCM. Conclusion: SIR without TCM offers superior reduction in noise nonuniformity and streaks relative to FBP with TCM. For some clinical applications in which skin dose may be a concern, SIR without TCM may be a better option. K. Li, W. Zhao, D. Gomez-Cardona: Nothing to disclose; G.-H. Chen: Research funded, General Electric Company.

  16. SU-E-T-596: P3DVHStats - a Novel, Automatic, Institution Customizable Program to Compute and Report DVH Quantities On Philips Pinnacle TPS

    SciTech Connect (OSTI)

    Wu, C

    2015-06-15

    Purpose: To implement a novel, automatic, institution-customizable DVH quantities evaluation and PDF report tool on the Philips Pinnacle treatment planning system (TPS). Methods: We developed an add-on program (P3DVHStats) to enable automatic evaluation of DVH quantities (including both volume- and dose-based quantities, such as V98, V100, and D2) and automatic PDF report generation, for EMR convenience. The implementation is based on a combination of the Philips Pinnacle scripting tool and the Java language pre-installed on each Pinnacle Sun Solaris workstation. A single Pinnacle script provides users convenient access to the program when needed. The activated script first exports DVH data for user-selected ROIs from the current Pinnacle plan trial; a Java program then provides a simple GUI, uses the data to compute any user-requested DVH quantities, and compares them with preset institutional DVH planning goals; if accepted by the user, the program also generates a PDF report of the results and exports it from Pinnacle to the EMR import folder via FTP. Results: The program was tested thoroughly and has been released for clinical use at our institution (Pinnacle Enterprise server with both thin-client and P3PC access) for all dosimetry and physics staff, with excellent feedback. It used to take a few minutes with an MS-Excel worksheet to calculate these DVH quantities for IMRT/VMAT plans and manually save them as a PDF report; with the new program, it takes a few mouse clicks and less than 30 seconds to complete the same tasks. Conclusion: A Pinnacle scripting and Java based program was successfully implemented and customized to our institutional needs. It dramatically reduces the time and effort needed for computing DVH quantities and EMR reporting.
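
    The volume- and dose-based DVH quantities the program computes (e.g., V98, D2) can be read off a cumulative DVH by interpolation. Below is a minimal numpy sketch, assuming the DVH is given as dose bins with monotonically decreasing fractional volumes; the sigmoid DVH is a toy curve, not clinical data.

```python
import numpy as np

def v_at_dose(dose_bins, cum_vol, dose):
    """V(dose): fraction of volume receiving at least `dose`,
    interpolated on a cumulative DVH (cum_vol falls from 1 to 0)."""
    return float(np.interp(dose, dose_bins, cum_vol))

def d_at_volume(dose_bins, cum_vol, vol_frac):
    """D(v): minimum dose received by the hottest `vol_frac` of the volume."""
    # np.interp needs an increasing x-array; cum_vol decreases, so reverse both
    return float(np.interp(vol_frac, cum_vol[::-1], dose_bins[::-1]))

# Toy cumulative DVH: 0-60 Gy grid, sigmoid fall-off around 50 Gy
dose = np.linspace(0, 60, 601)
cum = 1.0 / (1.0 + np.exp((dose - 50.0) / 1.0))
rx = 50.0  # hypothetical prescription dose
print(f"V98% = {100 * v_at_dose(dose, cum, 0.98 * rx):.1f}%")
print(f"D2%  = {d_at_volume(dose, cum, 0.02):.1f} Gy")
```

The same two interpolations cover the whole V/D family (V100, D95, ...), which is why a small script can replace the manual spreadsheet workflow described above.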

  17. MO-G-BRE-04: Automatic Verification of Daily Treatment Deliveries and Generation of Daily Treatment Reports for a MR Image-Guided Treatment Machine

    SciTech Connect (OSTI)

    Yang, D; Li, X; Li, H; Wooten, H; Green, O; Rodriguez, V; Mutic, S

    2014-06-15

    Purpose: Two aims of this work were to develop a method to automatically verify treatment delivery accuracy immediately after patient treatment and to develop a comprehensive daily treatment report to provide all required information for daily MR-IGRT review. Methods: After systematically analyzing the requirements for treatment delivery verification and understanding the available information from a novel MR-IGRT treatment machine, we designed a method to use 1) treatment plan files, 2) delivery log files, and 3) dosimetric calibration information to verify the accuracy and completeness of daily treatment deliveries. The method verifies the correctness of delivered treatment plans and beams, beam segments, and for each segment, the beam-on time and MLC leaf positions. Composite primary fluence maps are calculated from the MLC leaf positions and the beam-on time. Error statistics are calculated on the fluence difference maps between the plan and the delivery. We also designed the daily treatment delivery report by including all required information for MR-IGRT and physics weekly review - the plan and treatment fraction information, dose verification information, daily patient setup screen captures, and the treatment delivery verification results. Results: The parameters in the log files (e.g. MLC positions) were independently verified and deemed accurate and trustworthy. A computer program was developed to implement the automatic delivery verification and daily report generation. The program was tested and clinically commissioned with sufficient IMRT and 3D treatment delivery data. The final version has been integrated into a commercial MR-IGRT treatment delivery system. Conclusion: A method was developed to automatically verify MR-IGRT treatment deliveries and generate daily treatment reports. Already in clinical use since December 2013, the system is able to facilitate delivery error detection and expedite physician daily IGRT review and physicist weekly chart
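
    The fluence-based check described above (composite fluence from MLC positions and beam-on time, then error statistics on the plan-vs-delivery difference) can be sketched as follows. This is a toy model with a grossly simplified MLC (unit-width leaves, integer pixel positions), not the commissioned implementation.

```python
import numpy as np

def composite_fluence(segments, width=40):
    """Sum per-segment apertures weighted by beam-on time.
    Each segment: (beam_on_time, left[i], right[i]) leaf positions
    in pixel units per leaf row (a simplified MLC model)."""
    n_rows = len(segments[0][1])
    fluence = np.zeros((n_rows, width))
    for t, left, right in segments:
        for row, (l, r) in enumerate(zip(left, right)):
            fluence[row, l:r] += t  # open aperture between the leaf tips
    return fluence

def error_stats(planned, delivered):
    """Simple error statistics on the fluence difference map."""
    diff = delivered - planned
    return {"max_abs": float(np.abs(diff).max()),
            "rms": float(np.sqrt((diff ** 2).mean()))}

plan = [(2.0, [10] * 8, [30] * 8)]   # planned segment: 8 rows open 10..30
log  = [(2.0, [10] * 8, [29] * 8)]   # delivered: one leaf bank off by a pixel
stats = error_stats(composite_fluence(plan), composite_fluence(log))
print(stats)
```

A single-pixel leaf error shows up directly in `max_abs`, illustrating why fluence-difference statistics are a sensitive end-of-day delivery check.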

  18. 2014-09-05 Issuance: Energy Conservation Standards for Automatic Commercial Ice Makers; Notice of Data Availability and Request for Comment

    Broader source: Energy.gov [DOE]

    This document is a pre-publication Federal Register notice of data availability and request for comment regarding energy conservation standards for automatic commercial ice makers, as issued by the Deputy Assistant Secretary for Energy Efficiency on September 5, 2014. Though it is not intended or expected, should any discrepancy occur between the document posted here and the document published in the Federal Register, the Federal Register publication controls. This document is being made available through the Internet solely as a means to facilitate the public's access to this document.

  19. Individual Radiological Protection Monitoring of Utrok Atoll Residents Based on Whole Body Counting of Cesium-137 (137Cs) and Plutonium Bioassay

    SciTech Connect (OSTI)

    Hamilton, T; Kehl, S; Brown, T; Martinelli, R; Hickman, D; Jue, T; Tumey, S; Langston, R

    2007-06-08

    This report contains individual radiological protection surveillance data developed during 2006 for adult members of a select group of families living on Utrok Atoll. These Group I volunteers all underwent a whole-body count to determine levels of internally deposited cesium-137 ({sup 137}Cs) and supplied a bioassay sample for analysis of plutonium isotopes. Measurement data were obtained and the results compared with an equivalent set of measurement data for {sup 137}Cs and plutonium isotopes from a second group of adult volunteers (Group II) who were long-term residents of Utrok Atoll. For the purposes of this comparison, Group II volunteers were considered representative of the general population on Utrok Atoll. The general aim of the study was to determine residual systemic burdens of fallout radionuclides in each volunteer group, develop data in response to addressing some specific concerns about the preferential uptake and potential health consequences of residual fallout radionuclides in Group I volunteers, and generally provide some perspective on the significance of radiation doses delivered to volunteers (and the general Utrok Atoll resident population) in terms of radiological protection standards and health risks. Based on dose estimates from measurements of internally deposited {sup 137}Cs and plutonium isotopes, the data and information developed in this report clearly show that neither volunteer group has acquired levels of internally deposited fallout radionuclides specific to nuclear weapons testing in the Marshall Islands that are likely to have any consequence on human health. Moreover, the dose estimates are well below radiological protection standards as prescribed by U.S. 
regulators and international agencies, and are very small when compared to doses from natural sources of radiation in the Marshall Islands and the threshold where radiation health effects could be either medically diagnosed in an individual or epidemiologically discerned in a

  20. A study of the utility of heat collectors in reducing the response time of automatic fire sprinklers located in production modules of Building 707

    SciTech Connect (OSTI)

    Shanley, J.H. Jr.; Budnick, E.K. Jr.

    1990-01-01

    Several of the ten production Modules in Building 707 at the Department of Energy Rocky Flats Plant recently underwent an alteration which can adversely affect the performance of the installed automatic fire sprinkler systems. The Modules have an approximate floor to ceiling height of 17.5 ft. The alterations involved removing the drop ceilings in the Modules which had been at a height of 12 ft above the floor. The sprinkler systems were originally installed with the sprinkler heads located below the drop ceiling in accordance with the nationally recognized NFPA 13, Standard for the Installation of Automatic Sprinkler Systems. The ceiling removal affects the sprinkler's response time and also violates NFPA 13. The scope of this study included evaluation of the feasibility of utilizing heat collectors to reduce the delays in sprinkler response created by the removal of the drop ceilings. The study also includes evaluation of substituting quick response sprinklers for the standard sprinklers currently in place, in combination with a heat collector.

  1. What is the best method to fit time-resolved data? A comparison of the residual minimization and the maximum likelihood techniques as applied to experimental time-correlated, single-photon counting data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A.; Vaswani, Namrata; Petrich, Jacob W.

    2016-02-10

    The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as “residual minimization” (RM) and “maximum likelihood” (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of “photon counts” was approximately 20, 200, 1000, 3000, and 6000 and there were about 2–200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson’s weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. Here, the robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
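
    The contrast between the two estimators can be illustrated with a toy simulation. The sketch below assumes an ideal single-exponential decay with no instrument response function, background, or window truncation (unlike the real measurements, which always included IRF convolution), and uses a lifetime near the rose bengal value of ~530 ps. In this idealized case the ML estimate is just the sample mean of the arrival times, while a naive unweighted least-squares ("residual minimization") fit to the log of a sparse histogram is biased by low-count bins.

```python
import numpy as np

rng = np.random.default_rng(1)
tau_true = 0.53  # ns, cf. rose bengal ~530 ps

def fit_ml(times):
    """For a plain exponential decay over an effectively infinite window,
    the Poisson/ML lifetime estimate reduces to the sample mean."""
    return times.mean()

def fit_rm(times, n_bins=64):
    """Naive residual minimization: unweighted least squares on
    log(counts) of the histogram, skipping empty bins (a common but
    biased shortcut at low counts)."""
    counts, edges = np.histogram(times, bins=n_bins, range=(0.0, 5.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    slope, _ = np.polyfit(centers[keep], np.log(counts[keep]), 1)
    return -1.0 / slope

# Sparse data: 200 photons per decay, 50 repeats (as in the paper's design)
ml, rm = [], []
for _ in range(50):
    t = rng.exponential(tau_true, size=200)
    ml.append(fit_ml(t))
    rm.append(fit_rm(t))
print(f"ML mean tau = {np.mean(ml):.3f}, RM mean tau = {np.mean(rm):.3f}")
```

The singleton counts in the histogram tail flatten the fitted slope and inflate the naive RM lifetime, while ML stays close to the true value, which is the qualitative behavior the paper reports for sparse data.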

  2. 2D potential measurements by applying automatic beam adjustment system to heavy ion beam probe diagnostic on the Large Helical Device

    SciTech Connect (OSTI)

    Shimizu, A.; Ido, T.; Kato, S.; Hamada, Y.; Kurachi, M.; Makino, R.; Nishiura, M.; Nishizawa, A.

    2014-11-15

    Two-dimensional potential profiles in the Large Helical Device (LHD) were measured with a heavy ion beam probe (HIBP). To measure a two-dimensional profile, the probe beam energy has to be changed. However, this task is not easy, because the beam transport line of the LHD-HIBP system is very long (∼20 m), and the required beam adjustment consumes much time. To reduce the probe beam energy adjustment time, an automatic beam adjustment system has been developed. Using this system, the time required to change the probe beam energy is dramatically reduced, such that two-dimensional potential profiles were successfully measured with the HIBP by changing the probe beam energy shot to shot.

  3. Performance evaluation of Automatic Extraction System. Volume V. Geotechnical investigations of the roof conditions in the area mined by the AES machine. Final technical report

    SciTech Connect (OSTI)

    Bieniawski, Z.T.; Rafia, F.; Newman, D.A.

    1980-07-01

    This report presents the results of an in-depth geotechnical investigation aimed at assessing the roof, floor, and coal pillar conditions in the area mined by an experimental Automatic Extraction System (AES), built by National Mine Service Co. The study included diamond core drilling, borescope observations, and detailed engineering geological mapping in Consolidation Coal's McElroy coal mine in West Virginia. The field investigations were accompanied by regional geology studies involving aerial photography and lineament analysis as well as by laboratory testing of 103 rock and coal samples. The roof conditions were interpreted by means of an engineering rock mass classification system, known as the Geomechanics Classification. It was found that the roof quality in the areas mined by the AES machine was poor and that the action of the AES support beams could be detrimental to the overall roof stability. Improvements in the procedures for evaluating future AES-type mining are suggested.

  4. SU-E-I-10: Automatic Monitoring of Accumulated Dose Indices From DICOM RDSR to Improve Radiation Safety in X-Ray Angiography

    SciTech Connect (OSTI)

    Omar, A; Bujila, R; Nowik, P; Karambatsakidou, A

    2014-06-01

    Purpose: To investigate the potential benefits of automatic monitoring of accumulated patient and staff dose indicators, i.e., CAK and KAP, from DICOM Radiation Dose Structured Reports (RDSR) in x-ray angiography (XA). Methods: Recently, RDSR has enabled the convenient aggregation of dose indices and technique parameters for XA procedures. The information contained in RDSR objects for three XA systems, dedicated to different types of clinical procedures, has been collected and aggregated in a database for over one year using a system developed with open-source software at the Karolinska University Hospital. Patient weight was added to the RDSR data via an interface with the Hospital Information System (HIS). Results: The linearly approximated trend in KAP over a period of a year for cerebrovascular, pelvic/peripheral vascular, and cardiovascular procedures showed a decrease of 12%, 20%, and 14%, respectively. The decrease was mainly due to hardware/software upgrades and new low-dose imaging protocols, and partially due to ongoing systematic radiation safety education of the clinical staff. The CAK was in excess of 3 Gy for 15 procedures and exceeded 5 Gy for 3 procedures. The dose indices also showed a significant dependence on patient weight for cardiovascular and pelvic/peripheral vascular procedures; a 10 kg shift in mean patient weight can result in a dose index increase of 25%. Conclusion: Automatic monitoring of accumulated dose indices can be utilized to notify the clinical staff and medical physicists when a dose index has exceeded a predetermined action level. This allows for convenient and systematic follow-up of patients at risk of developing deterministic skin injuries. Furthermore, trend analyses of dose indices over time are a valuable resource for identifying potential positive or negative effects (dose increase/decrease) from changes in hardware, software, and clinical work habits.
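
    The action-level notification described in the conclusion amounts to a simple filter over parsed RDSR records. The sketch below is illustrative only: the `XAProcedure` container and its field names are hypothetical stand-ins for values parsed from RDSR objects, not DICOM attribute names, and the 3 Gy / 5 Gy thresholds are taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class XAProcedure:
    """Minimal stand-in for values parsed from a DICOM RDSR object
    (field names here are illustrative, not DICOM tag names)."""
    patient_id: str
    cak_gy: float     # cumulative air kerma at the reference point
    kap_gycm2: float  # kerma-area product

ACTION_LEVELS_GY = (3.0, 5.0)  # follow-up thresholds cited in the abstract

def triage(procedures):
    """Flag procedures whose CAK exceeds an action level, so staff can
    follow up patients at risk of deterministic skin injury."""
    alerts = []
    for p in procedures:
        exceeded = [lvl for lvl in ACTION_LEVELS_GY if p.cak_gy > lvl]
        if exceeded:
            alerts.append((p.patient_id, p.cak_gy, max(exceeded)))
    return alerts

cases = [XAProcedure("A1", 1.2, 80.0),
         XAProcedure("B2", 3.4, 210.0),
         XAProcedure("C3", 5.6, 390.0)]
for pid, cak, level in triage(cases):
    print(f"{pid}: CAK {cak} Gy exceeds {level} Gy action level")
```

In a production system the same filter would run automatically as each RDSR arrives, with the highest exceeded level deciding the urgency of follow-up.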

  5. TU-C-17A-04: BEST IN PHYSICS (THERAPY) - A Supervised Framework for Automatic Contour Assessment for Radiotherapy Planning of Head-Neck Cancer

    SciTech Connect (OSTI)

    Chen, H; Kavanaugh, J; Tan, J; Dolly, S; Gay, H; Thorstad, W; Anastasio, M; Altman, M; Mutic, S; Li, H

    2014-06-15

    Purpose: Precise contour delineation of tumor targets and critical structures from CT simulations is essential for accurate radiotherapy (RT) treatment planning. However, manual and automatic delineation processes can be error prone due to limitations in imaging techniques and individual anatomic variability. Tedious and laborious manual verification is hence needed. This study develops a general framework for automatically assessing RT contours for head-neck cancer patients using geometric attribute distribution models (GADMs). Methods: Geometric attributes (centroid and volume) were computed from physician-approved RT contours of 29 head-neck patients. Considering anatomical correlation between neighboring structures, the GADM for each attribute was trained to characterize intra- and interpatient structure variations using principal component analysis. Each trained GADM was scalable and deformable, but constrained by the principal attribute variations of the training contours. A new hierarchical model adaptation algorithm was utilized to assess the RT contour correctness for a given patient. Receiver operating characteristic (ROC) curves were employed to evaluate and tune system parameters for the training models. Results: Experiments utilizing training and non-training data sets with simulated contouring errors were conducted to validate the framework performance. Promising assessment results of contour normality/abnormality for the training contour-based data were achieved with excellent accuracy (0.99), precision (0.99), recall (0.83), and F-score (0.97), while corresponding values of 0.84, 0.96, 0.83, and 0.9 were achieved for the non-training data. Furthermore, the areas under the ROC curves were above 0.9, validating the accuracy of this test. Conclusion: The proposed framework can reliably identify contour normality/abnormality based upon intra- and inter-structure constraints derived from clinically-approved contours. It also allows physicians to
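
    The attribute-model idea (principal component analysis of geometric attributes, with abnormality judged against the principal variations of the training contours) can be sketched as follows. This is a toy numpy illustration of the PCA step only, not the authors' GADM implementation or its hierarchical adaptation; the attribute vectors and variance threshold are assumptions.

```python
import numpy as np

def fit_gadm(attributes):
    """PCA model of geometric attributes (rows: training patients,
    cols: stacked centroid coordinates and volumes). Returns the mean
    and the principal axes explaining ~95% of the variance."""
    mean = attributes.mean(axis=0)
    centered = attributes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.95) + 1)
    return mean, vt[:k]

def residual(mean, axes, x):
    """Distance from x to its projection onto the model subspace;
    a large residual suggests an abnormal contour set."""
    c = x - mean
    proj = axes.T @ (axes @ c)
    return float(np.linalg.norm(c - proj))

rng = np.random.default_rng(2)
# 29 "patients" with 6 attributes correlated through 2 latent factors,
# mimicking inter-structure anatomical correlation
latent = rng.normal(size=(29, 2))
train = latent @ rng.normal(size=(2, 6)) + rng.normal(scale=0.05, size=(29, 6))
mean, axes = fit_gadm(train)

normal = train[0]
abnormal = normal + np.array([0, 0, 3.0, 0, 0, 0])  # one centroid shifted
print(residual(mean, axes, normal) < residual(mean, axes, abnormal))
```

A contour set whose attributes cannot be reproduced by the trained principal variations (large residual) is the kind of case the framework flags for physician review.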

  6. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    SciTech Connect (OSTI)

    Grazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have a similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow compared with other objects in the image. While this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is also incorporated: an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. The geodesic propagation from a given network seed under this metric is then combined with hydrological operators for overland-flow simulation to extract the paths that contain the most line evidence and to identify them with the target network.
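
    The directional term of the metric rests on the eigen-decomposition of the gradient structure tensor. Below is a self-contained numpy sketch (a box window stands in for the Gaussian smoothing usually used) showing how the eigenvalue-based coherence highlights a line-like structure against flat background; the 2x2 eigenvalues are computed in closed form.

```python
import numpy as np

def convolve_same(a, kern):
    """Tiny 'same' 2-D convolution with zero padding (numpy only)."""
    kh, kw = kern.shape
    pa = np.pad(a, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(a)
    for i in range(kh):
        for j in range(kw):
            out += kern[i, j] * pa[i:i + a.shape[0], j:j + a.shape[1]]
    return out

def structure_tensor(img, half_window=1):
    """Per-pixel gradient structure tensor, smoothed with a box window
    (a stand-in for the usual Gaussian window)."""
    gy, gx = np.gradient(img.astype(float))
    k = 2 * half_window + 1
    box = np.ones((k, k)) / k ** 2
    return (convolve_same(gx * gx, box),
            convolve_same(gx * gy, box),
            convolve_same(gy * gy, box))

def coherence(jxx, jxy, jyy):
    """Eigenvalue-based coherence (l1 - l2)/(l1 + l2): near 1 on
    line-like structures, near 0 in flat or isotropic regions."""
    tr = jxx + jyy
    det = jxx * jyy - jxy ** 2
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    return (l1 - l2) / (l1 + l2 + 1e-12)

img = np.zeros((32, 32))
img[:, 15:17] = 1.0  # a vertical "road" two pixels wide
coh = coherence(*structure_tensor(img))
print(float(coh[16, 14]) > 0.9, float(coh[16, 25]) < 0.1)
```

The dominant eigenvector (not shown) gives the local line direction, which is what an anisotropic geodesic metric would use to make propagation cheap along the road and expensive across it.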

  7. Guidance on the Use of Hand-Held Survey Meters for radiological Triage: Time-Dependent Detector Count Rates Corresponding to 50, 250, and 500 mSv Effective Dose for Adult Males and Adult Females

    SciTech Connect (OSTI)

    Bolch, W.E.; Hurtado, J.L.; Lee, C.; Manger, Ryan P; Hertel, Nolan; Burgett, E.; Dickerson, W.

    2012-01-01

    In June 2006, the Radiation Studies Branch of the Centers for Disease Control and Prevention held a workshop to explore rapid methods of facilitating radiological triage of large numbers of potentially contaminated individuals following detonation of a radiological dispersal device. Two options were discussed. The first was the use of traditional gamma cameras in nuclear medicine departments operated as makeshift whole-body counters. Guidance on this approach is currently available from the CDC. This approach would be feasible if a manageable number of individuals were involved, transportation to the relevant hospitals was quickly provided, and the medical staff at each facility had been previously trained in this non-traditional use of their radiopharmaceutical imaging devices. If, however, substantially larger numbers of individuals (100s to 1,000s) needed radiological screening, other options must be given to first responders, first receivers, and health physicists providing medical management. In this study, the second option of the workshop was investigated: the use of commercially available portable survey meters (either NaI or GM based) for assessing potential ranges of effective dose (<50, 50-250, 250-500, and >500 mSv). Two hybrid computational phantoms were used to model an adult male and an adult female subject internally contaminated with 241Am, 60Co, 137Cs, 131I, or 192Ir following an acute inhalation or ingestion intake. As a function of time following the exposure, the net count rates corresponding to committed effective doses of 50, 250, and 500 mSv were estimated via Monte Carlo radiation transport simulation for each of four different detector types, positions, and screening distances. Measured net count rates can be compared to these values, and an assignment of one of four possible effective dose ranges could be made.
The method implicitly assumes that all external contamination has been removed prior to screening and that the measurements be

  8. AUTOMATIC AIR BURST DIRECTION FINDER

    DOE Patents [OSTI]

    Allard, G.A.

    1952-01-31

    This patent application describes an atomic explosion direction indicator comprising a geometric heat-scorchable indicating surface symmetrical about an axis, elevation and azimuth markings on the heat-scorchable surface, and an indicating rod at the axis of said surface arranged to cast a shadow thereon, whereby heat from an atomic explosion will scorch a pattern on said surface indicative of the azimuth and elevation of said explosion.

  9. Automatic HTS force measurement instrument

    DOE Patents [OSTI]

    Sanders, Scott T.; Niemann, Ralph C.

    1999-01-01

    A device for measuring the levitation force of a high temperature superconductor sample with respect to a reference magnet includes a receptacle for holding several high temperature superconductor samples each cooled to superconducting temperature. A rotatable carousel successively locates a selected one of the high temperature superconductor samples in registry with the reference magnet. Mechanism varies the distance between one of the high temperature superconductor samples and the reference magnet, and a sensor measures levitation force of the sample as a function of the distance between the reference magnet and the sample. A method is also disclosed.

  10. Automatic HTS force measurement instrument

    DOE Patents [OSTI]

    Sanders, S.T.; Niemann, R.C.

    1999-03-30

    A device is disclosed for measuring the levitation force of a high temperature superconductor sample with respect to a reference magnet includes a receptacle for holding several high temperature superconductor samples each cooled to superconducting temperature. A rotatable carousel successively locates a selected one of the high temperature superconductor samples in registry with the reference magnet. Mechanism varies the distance between one of the high temperature superconductor samples and the reference magnet, and a sensor measures levitation force of the sample as a function of the distance between the reference magnet and the sample. A method is also disclosed. 3 figs.

  11. Department of Energy Order No. 202-05-03 Notice of Planned Outages, in Sequence During January 2006

    Office of Environmental Management (EM)

    Department of Energy Official Touts Bush Administration's Efforts to Modernize our Nation's Electric Grid. August 28, 2007 - 11:08am. Louisiana to increase energy efficiency with upgrades between the LaBarre and Metairie electric substations. NEW ORLEANS, LA - The U.S. Department of Energy's (DOE) newly confirmed Assistant

  12. Department of Energy Order No. 202-05-03 Revised Notice of Planned Outages During January 2006

    Office of Environmental Management (EM)

  13. Template-based CTA to x-ray angio rigid registration of coronary arteries in frequency domain with automatic x-ray segmentation

    SciTech Connect (OSTI)

    Aksoy, Timur; Unal, Gozde; Demirci, Stefanie; Navab, Nassir; Degertekin, Muzaffer

    2013-10-15

    Purpose: A key challenge for image guided coronary interventions is accurate and absolutely robust image registration bringing together preinterventional information extracted from a three-dimensional (3D) patient scan and live interventional image information. In this paper, the authors present a novel scheme for 3D to two-dimensional (2D) rigid registration of coronary arteries extracted from a preoperative image scan (3D) and a single segmented intraoperative x-ray angio frame in the frequency and spatial domains for real-time angiography interventions by C-arm fluoroscopy. Methods: Most existing rigid registration approaches require a close initialization due to the abundance of local minima and the high complexity of search algorithms. The authors' method eliminates this requirement by transforming the projections into the translation-invariant Fourier domain for estimating the 3D pose. For 3D rotation recovery, template Digitally Reconstructed Radiographs (DRRs) as candidate poses of the 3D vessels of the segmented computed tomography angiography are produced by rotating the camera (image intensifier) around the DICOM angle values within a specific range, as in the C-arm setup. The authors compared the 3D poses of the template DRRs with the segmented x-ray after equalizing the scales in three domains, namely Fourier magnitude, Fourier phase, and Fourier polar. The best rotation pose candidate was chosen as the one with the highest similarity measure returned by the methods in these domains. It has been noted in the literature that frequency domain methods are robust against noise and occlusion, which was also validated by the authors' results. The 3D translation of the volume was then recovered by distance-map-based BFGS optimization, well suited to the convex structure of the authors' objective function, which is free of local minima due to the distance maps. A novel automatic x-ray vessel segmentation was also performed in this study. Results: Final results were evaluated in 2D projection space for patient data; and
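
    The translation invariance that motivates working in the Fourier-magnitude domain is easy to demonstrate: circularly shifting an image leaves |FFT| unchanged, so pose comparison in that domain needs no translation search, while rotation does change the spectrum. A small numpy illustration on random test images (not angiographic data):

```python
import numpy as np

def fourier_magnitude(img):
    """|FFT| of a projection image: identical for any circular shift
    of the image, because a shift only changes the Fourier phase."""
    return np.abs(np.fft.fft2(img))

def magnitude_similarity(a, b):
    """Normalized cross-correlation of the two magnitude spectra."""
    fa, fb = fourier_magnitude(a).ravel(), fourier_magnitude(b).ravel()
    fa, fb = fa - fa.mean(), fb - fb.mean()
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb)))

rng = np.random.default_rng(3)
vessels = rng.random((64, 64))
shifted = np.roll(vessels, shift=(7, -5), axis=(0, 1))
rotated = np.rot90(vessels)
print(round(magnitude_similarity(vessels, shifted), 6))  # shift-invariant
print(magnitude_similarity(vessels, rotated) < 0.999)    # rotation changes it
```

This is why the rotation can be scored by template matching in the magnitude domain first, with the translation recovered afterwards in the spatial domain.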

  14. SU-E-CAMPUS-I-04: Automatic Skin-Dose Mapping for An Angiographic System with a Region-Of-Interest, High-Resolution Detector

    SciTech Connect (OSTI)

    Vijayan, S; Rana, V; Setlur Nagesh, S; Ionita, C; Rudin, S; Bednarek, D

    2014-06-15

    Purpose: Our real-time skin dose tracking system (DTS) has been upgraded to monitor dose for the micro-angiographic fluoroscope (MAF), a high-resolution, small field-of-view x-ray detector. Methods: The MAF has been mounted on a changer on a clinical C-Arm gantry so it can be used interchangeably with the standard flat-panel detector (FPD) during neuro-interventional procedures when high resolution is needed in a region-of-interest. To monitor patient skin dose when using the MAF, our DTS has been modified to automatically account for the change in scatter for the very small MAF FOV and to provide separated dose distributions for each detector. The DTS is able to provide a color-coded mapping of the cumulative skin dose on a 3D graphic model of the patient. To determine the correct entrance skin exposure to be applied by the DTS, a correction factor was determined by measuring the exposure at the entrance surface of a skull phantom with an ionization chamber as a function of entrance beam size for various beam filters and kVps. Entrance exposure measurements included primary radiation, patient backscatter and table forward scatter. To allow separation of the dose from each detector, a parameter log is kept that allows a replay of the procedure exposure events and recalculation of the dose components.The graphic display can then be constructed showing the dose distribution from the MAF and FPD separately or together. Results: The DTS is able to provide separate displays of dose for the MAF and FPD with field-size specific scatter corrections. These measured corrections change from about 49% down to 10% when changing from the FPD to the MAF. Conclusion: The upgraded DTS allows identification of the patient skin dose delivered when using each detector in order to achieve improved dose management as well as to facilitate peak skin-dose reduction through dose spreading. 
Research supported in part by Toshiba Medical Systems Corporation and NIH Grants R43FD0158401, R44FD

  15. SU-E-T-361: Clinical Benefit of Automatic Beam Gating Mixed with Breath Hold in Radiation Therapy of Left Breast

    SciTech Connect (OSTI)

    Wu, J; Hill, G; Spiegel, J; Ye, J; Mehta, V

    2014-06-01

    Purpose: To investigate the clinical and dosimetric benefits of automatic gating of left breast treatment mixed with a breath-hold technique. Methods: Two Active Breathing Control systems, ABC2.0 and ABC3.0, were used during simulation and treatment delivery. The two systems differ in that ABC2.0 is a breath-hold system without beam control capability, while ABC3.0 is capable of both breath hold and beam gating. At simulation, each patient was scanned twice: once with free breathing (FB) and once with breath hold through ABC. The treatment plan was generated on the CT with ABC. The same plan was also recalculated on the CT with FB. These two plans were compared to assess plan quality. For treatments with ABC2.0, beams with MU > 55 were manually split into multiple subfields. All subfields were identical and shared the total MU. For treatment with ABC3.0, beam splitting was unnecessary. Instead, treatment was delivered in gating mode mixed with the breath-hold technique. Treatment delivery efficiency using the two systems was compared. Results: The prescribed dose was 50.4 Gy at 1.8 Gy/fraction. The maximum heart dose averaged over 10 patients was 46.0±2.5 Gy and 24.5±12.2 Gy for treatments with FB and with ABC, respectively. The corresponding heart V10 was 13.2±3.6% and 1.0±1.6%, respectively. The averaged MUs were 99.8±7.5 for LMT and 99.2±9.4 for LLT. For treatment with ABC2.0, the original beam was normally split into 2 subfields. The averaged total time to deliver all beams was 4.3±0.4 min for treatments with ABC2.0 and 3.3±0.6 min for treatments with ABC3.0 in gating mode. Conclusion: Treatment with ABC greatly reduced heart dose. Compared to treatments with ABC2.0, gating with ABC3.0 reduced the total treatment time by 23%. Use of ABC3.0 improved delivery efficiency and eliminated the possibility of mistreatments. The latter may happen with ABC2.0, where the beam is not terminated when the breath signal falls outside of the treatment window.

  16. Deformable image registration based automatic CT-to-CT contour propagation for head and neck adaptive radiotherapy in the routine clinical setting

    SciTech Connect (OSTI)

    Kumarasiri, Akila; Siddiqui, Farzan; Liu, Chang; Yechieli, Raphael; Shah, Mira; Pradhan, Deepak; Zhong, Hualiang; Chetty, Indrin J.; Kim, Jinkoo

    2014-12-15

    Purpose: To evaluate the clinical potential of deformable image registration (DIR)-based automatic propagation of physician-drawn contours from a planning CT to midtreatment CT images for head and neck (H and N) adaptive radiotherapy. Methods: Ten H and N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken approximately 3–4 week into treatment, were considered retrospectively. Clinically relevant organs and targets were manually delineated by a radiation oncologist on both sets of images. Four commercial DIR algorithms, two B-spline-based and two Demons-based, were used to deform CT1 and the relevant contour sets onto corresponding CT2 images. Agreement of the propagated contours with manually drawn contours on CT2 was visually rated by four radiation oncologists in a scale from 1 to 5, the volume overlap was quantified using Dice coefficients, and a distance analysis was done using center of mass (CoM) displacements and Hausdorff distances (HDs). Performance of these four commercial algorithms was validated using a parameter-optimized Elastix DIR algorithm. Results: All algorithms attained Dice coefficients of >0.85 for organs with clear boundaries and those with volumes >9 cm{sup 3}. Organs with volumes <3 cm{sup 3} and/or those with poorly defined boundaries showed Dice coefficients of ∼0.5–0.6. For the propagation of small organs (<3 cm{sup 3}), the B-spline-based algorithms showed higher mean Dice values (Dice = 0.60) than the Demons-based algorithms (Dice = 0.54). For the gross and planning target volumes, the respective mean Dice coefficients were 0.8 and 0.9. There was no statistically significant difference in the Dice coefficients, CoM, or HD among investigated DIR algorithms. The mean radiation oncologist visual scores of the four algorithms ranged from 3.2 to 3.8, which indicated that the quality of transferred contours was “clinically acceptable with minor modification or major modification in a small number of contours
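
    Two of the agreement measures used above, the Dice coefficient and the center-of-mass (CoM) displacement, can be sketched for binary contour masks as follows. The cubic "contours" and the isotropic voxel spacing are toy assumptions for illustration only.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def com_displacement(a, b, spacing=(1.0, 1.0, 1.0)):
    """Euclidean distance between the centers of mass of two masks,
    in physical units given the voxel spacing."""
    ca = np.array(np.nonzero(a)).mean(axis=1) * np.asarray(spacing)
    cb = np.array(np.nonzero(b)).mean(axis=1) * np.asarray(spacing)
    return float(np.linalg.norm(ca - cb))

manual = np.zeros((20, 20, 20), dtype=bool)
manual[5:15, 5:15, 5:15] = True          # manual contour: a 10^3-voxel cube
propagated = np.roll(manual, 2, axis=0)  # DIR result shifted by 2 voxels
print(f"Dice = {dice(manual, propagated):.3f}")
print(f"CoM displacement = {com_displacement(manual, propagated):.1f}")
```

A two-voxel shift of a 10-voxel-wide structure already drops Dice to 0.8, which helps put the reported ~0.5-0.6 values for small organs in perspective: for small volumes, tiny absolute displacements dominate the overlap score.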

  17. Opportunities for Decay Counting of Environmental Radioisotopes...

    Office of Scientific and Technical Information (OSTI)


  18. Two Million Smart Meters and Counting

    Office of Energy Efficiency and Renewable Energy (EERE)

    A major milestone toward the implementation of smart grid technologies, which an analysis found could reduce electricity use by more than four percent annually by 2030; that would mean consumers around the country would see savings of over $20 billion each year.

  19. Improving Photoelectron Counting and Particle Identification...

    Office of Scientific and Technical Information (OSTI)

    Gorel, P.; Graham, K.; Grace, E.; Guerrero, N.; Guiseppe, V.; Hallin, A.L.; Harvey, P.; Hearns, C.; Henning, R.; Hime, Andrew; Hofgartner, J.; Jaditz, S.; ...

  20. 2013 Feds Feed Families: Your Generosity Counts

    Broader source: Energy.gov [DOE]

    Federal employees are asked to help local food banks replenish supplies for the hungry during the summer months.

  1. Counting molecular-beam grown graphene layers

    SciTech Connect (OSTI)

    Plaut, Annette S.; Wurstbauer, Ulrich; Pinczuk, Aron; Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 ; Garcia, Jorge M.; Pfeiffer, Loren N.

    2013-06-17

    We have used the ratio of the integrated intensity of graphene's Raman G peak to that of the silicon substrate's first-order optical phonon peak to accurately determine the number of graphene layers across our molecular-beam (MB) grown graphene films. We find that these results agree well both with those from our own exfoliated single- and few-layer graphene flakes and with the results of Koh et al. [ACS Nano 5, 269 (2011)]. We hence distinguish regions of single-, bi-, tri-, four-layer, etc., graphene, consecutively, as we scan coarsely across our MB-grown graphene. This is the first, but crucial, step toward being able to grow, by such molecular-beam techniques, a specified number of large-area graphene layers, to order.
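
    The layer-counting step described in this abstract reduces to a nearest-neighbor lookup of a measured intensity ratio against per-layer calibration values. A minimal sketch, assuming hypothetical calibration ratios (the numbers below are illustrative placeholders, not data from the paper):

```python
# Hypothetical calibration: I(G)/I(Si) intensity ratios for known layer counts.
# These values are illustrative assumptions, not measurements from the paper.
CALIBRATION = {1: 0.031, 2: 0.063, 3: 0.095, 4: 0.128}

def count_layers(measured_ratio):
    """Return the layer count whose calibrated ratio is closest to the measurement."""
    return min(CALIBRATION, key=lambda n: abs(CALIBRATION[n] - measured_ratio))
```

    In practice the calibration table would be built from reference samples of known thickness (e.g., exfoliated flakes) before scanning across the MB-grown film.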

  2. Counting graphene layers with very slow electrons

    SciTech Connect (OSTI)

    Frank, Luděk; Mikmeková, Eliška; Müllerová, Ilona; Lejeune, Michal

    2015-01-05

    The study aimed to collect data on the transmissivity of freestanding graphene for electrons across the full energy scale, down to the lowest energies. Here, we show that the electron transmissivity of graphene drops with decreasing electron energy and remains below 10% for energies below 30 eV, and that the slow-electron transmissivity value is suitable for reliable determination of the number of graphene layers. Moreover, electrons incident below 50 eV release adsorbed hydrocarbon molecules and effectively clean graphene, in contrast to faster electrons, which decompose these molecules and create carbonaceous contamination.

  3. Count Directive No. Title OPI Approval Date

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Council Honors Oak Ridge's EM Program with Prestigious Historic Preservation Award. August 15, 2016 - 12:35pm. Left to right: ACHP Chairman Milford Wayne Donaldson; Knox Heritage Executive Director Kim Trent; OREM Manager Sue Cange; and ACHP Expert Member Robert Stanton with the Chairman's Award for Achievement in Historic Preservation.

  4. Business Owners: Respond to an Energy Emergency | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Business Owners: Respond to an Energy Emergency. Ensure your building is safe to occupy: Initially allow only essential, critical-operations staff into restricted areas. Ask your local or State health department for guidance on determining the safety of your building. Decide whether to activate backup power: If your backup generator doesn't automatically turn on during a power outage, you'll have to determine when to

  5. Navasota Valley Elec Coop, Inc | Open Energy Information

    Open Energy Info (EERE)

    Website: www.navasotavalley.com Facebook: https://www.facebook.com/navasotavalley Outage Hotline: 1-800-443-9462 Outage Map: outages.navasotavalley.com:85 References: EIA...

  6. Clark Energy Coop Inc | Open Energy Information

    Open Energy Info (EERE)

    Facebook: https://www.facebook.com/pages/Clark-Energy-Coop/573449969388809 Outage Hotline: 1-800-992-3269 Outage Map: outage.clarkenergy.com References: EIA Form...

  7. Bluebonnet Electric Coop, Inc | Open Energy Information

    Open Energy Info (EERE)

    Territory: Texas Phone Number: 800-949-4414 Website: www.bluebonnetelectric.coop Outage Hotline: 800-949-4414 Outage Map: outage.bluebonnetelectric.coop References: EIA Form...

  8. Minnesota Valley Electric Coop | Open Energy Information

    Open Energy Info (EERE)

    https://www.facebook.com/pages/Minnesota-Valley-Electric-Cooperative/212971310374 Outage Hotline: 1-800-232-2328 Outage Map: outage.mvec.net References: EIA Form EIA-861...

  9. Itasca-Mantrap Co-op Electrical Assn | Open Energy Information

    Open Energy Info (EERE)

    www.facebook.com/pages/Itasca-Mantrap-Co-op-Electrical-Association/443726809007201?ref=stream Outage Hotline: 888-713-3377 Outage Map: outage.itasca-mantrap.com References:...

  10. Jefferson Electric Member Corp | Open Energy Information

    Open Energy Info (EERE)

    Facebook: https://www.facebook.com/pages/Jefferson-Energy-Cooperative/256668934485611?ref=settings Outage Hotline: 1-877-JEFFERSON; 706-547-2167 Outage Map: outage.jec.coop:83...

  11. Cumberland Valley Rural E C C (Tennessee) | Open Energy Information

    Open Energy Info (EERE)

    Twitter: @CVECoop Facebook: https://ebill.cumberlandvalley.coop/woViewer/mapviewer.html?config=Outage+Web+Map Outage Hotline: 800-513-2677 Outage Map: ebill.cumberlandvalley.c...

  12. Ouachita Electric Coop Corp | Open Energy Information

    Open Energy Info (EERE)

    www.facebook.com/pages/Ouachita-Electric-Cooperative-Corporation/161532627349323?fref=photo Outage Hotline: 1-877-252-4538 Outage Map: www.oecc.com/view-outage References: EIA...

  13. Tri-County Electric Coop, Inc (Florida) | Open Energy Information

    Open Energy Info (EERE)

    Website: www.tcec.com Facebook: https://www.facebook.com/TriCountyElectricFlorida Outage Hotline (Outage Reporting): 1-800-999-2285 Outage Map: outage.tcec.com References: EIA...

  14. Automatic monitoring of vibration welding equipment (Patent)...

    Office of Scientific and Technical Information (OSTI)

    A vibration welding system includes vibration welding equipment having a welding horn and anvil, a host device, a check station, and a robot. The robot moves the horn and anvil via ...

  15. Evaluation of an automatic uranium titration system

    SciTech Connect (OSTI)

    Lewis, K.

    1980-01-01

    The titration system utilizes the constant-current coulometric titration of Goldbeck and Lerner. U(VI) is reduced to U(IV) by Fe(II). V(V) is generated to titrate the U(IV), and the titration is followed potentiometrically. The evaluation shows that the recovery of uranium is 100% at the 40-mg level. The accuracy is generally ±0.10% or better. The smallest sample weight at which reliable results were obtained was 40 mg of uranium. Time for one analysis is 15 minutes. Advantages and disadvantages of the automated titrator are listed. (DLC)

  16. Clothes Dryer Automatic Termination Sensor Evaluation

    SciTech Connect (OSTI)

    TeGrotenhuis, Ward E.

    2014-10-01

    Volume 1: Characterization of Energy Use in Residential Clothes Dryers. The efficacy and energy efficiency of clothes dryers are studied in this evaluation.

  17. FABLE - automatic Fortran to C++ conversion

    Energy Science and Technology Software Center (OSTI)

    2010-08-09

    FABLE is a Fortran to C++ source-to-source conversion tool. This enables the continued development of new methods even while switching programming languages. FABLE is available under a nonrestrictive open source license. In FABLE the analysis of the Fortran source is separated from the generation of the C++ source; therefore parts of FABLE could be reused for other target languages. Hardware req.: multi-platform. Types of files: source code, sample problem input data, sample problem output, installation instructions, user guide.

  18. Temperature actuated automatic safety rod release

    DOE Patents [OSTI]

    Hutter, Ernest; Pardini, John A.; Walker, David E.

    1987-01-01

    A temperature-actuated apparatus is disclosed for releasably supporting a safety rod in a nuclear reactor, comprising a safety rod upper adapter having a retention means, a drive shaft which houses the upper adapter, and a bimetallic means supported within the drive shaft and having at least one ledge which engages a retention means of the safety rod upper adapter. A pre-determined increase in temperature causes the bimetallic means to deform so that the ledge disengages from the retention means, whereby the bimetallic means releases the safety rod into the core of the reactor.

  19. Temperature actuated automatic safety rod release

    DOE Patents [OSTI]

    Hutter, E.; Pardini, J.A.; Walker, D.E.

    1984-03-13

    A temperature-actuated apparatus is disclosed for releasably supporting a safety rod in a nuclear reactor, comprising a safety rod upper adapter having a retention means, a drive shaft which houses the upper adapter, and a bimetallic means supported within the drive shaft and having at least one ledge which engages a retention means of the safety rod upper adapter. A pre-determined increase in temperature causes the bimetallic means to deform so that the ledge disengages from the retention means, whereby the bimetallic means releases the safety rod into the core of the reactor.

  20. TUNE: Compiler-Directed Automatic Performance Tuning

    SciTech Connect (OSTI)

    Hall, Mary

    2014-09-18

    This project has developed compiler-directed performance tuning technology targeting the Cray XT4 Jaguar system at Oak Ridge, which has multi-core Opteron nodes with SSE-3 SIMD extensions, and the Cray XE6 Hopper system at NERSC. To achieve this goal, we combined compiler technology for model-guided empirical optimization for memory hierarchies with SIMD code generation, which have been developed by the PIs over the past several years. We examined DOE Office of Science applications to identify performance bottlenecks and apply our system to computational kernels that operate on dense arrays. Our goal for this performance-tuning technology has been to yield hand-tuned levels of performance on DOE Office of Science computational kernels, while allowing application programmers to specify their computations at a high level without requiring manual optimization. Overall, we aim to make our technology for SIMD code generation and memory hierarchy optimization a crucial component of high-productivity Petaflops computing through a close collaboration with the scientists in national laboratories.

  1. Automatically woven three-directional composite structures

    SciTech Connect (OSTI)

    Bruno, P.S.; Keith, D.O.; Vicario, A.A. Jr.

    1986-07-01

    Three-directional (3-D) fiber-reinforced composites were demonstrated with advantages for certain missile and space structures. The applications range from carbon-carbon (c-c) to carbon-epoxy structures. 3-D carbon fiber preforms were woven using automated techniques developed by Aerospatiale of France and then impregnated and processed into c-c or carbon-epoxy structures. Demonstrated structures include c-c ITEs and exit cones for rocket nozzles and carbon-epoxy adapter rings for rocket cases. Other potential applications, including satellite truss joints and meteoroid impact shields for space station components, are identified. Advantages of these structures include automated fabrication, improved mechanical properties, and greater reliability. 16 figures, 1 table.

  2. Engine brake control in automatic transmission

    SciTech Connect (OSTI)

    Hayasaki, K.; Sugano, K.

    1988-09-13

    This patent describes an engine braking control for a transmission for an automotive vehicle having an engine, the transmission including an input member drivingly coupled to the engine and an output member subject to load from driving wheels of the automotive vehicle, the transmission also including a first rotary member, a second rotary member, a hydraulically operated clutch selectively establishing a drive connection between the first rotary member and the second rotary member, and a one-way clutch arranged in parallel to the hydraulically operated clutch such that when the hydraulically operated clutch is released, the one-way clutch transmits forward torque from the first rotary member to the second rotary member, but interrupts transmission of reverse torque to the first rotary member from the second rotary member, the engine braking control comprising: means for providing an engine braking command fluid pressure signal when demanded by a vehicle operator; a valve means for normally discharging hydraulic fluid from the hydraulically operated clutch to deactivate the hydraulically operated clutch, the valve means being fluidly connected to the hydraulically operated clutch, the engine braking command fluid pressure signal providing means, and a drain port. The valve means includes a valve spool having a first position, where the hydraulically operated clutch is allowed to communicate with the drain port to permit discharge of hydraulic fluid therefrom and thus the hydraulically operated clutch is caused to be deactivated, and a second position, where the hydraulically operated clutch is disconnected from the drain port and allowed to communicate with the engine braking command fluid pressure signal.

  3. Automatically closing swing gate closure assembly

    DOE Patents [OSTI]

    Chang, Shih-Chih; Schuck, William J.; Gilmore, Richard F.

    1988-01-01

    A swing gate closure assembly for nuclear reactor tipoff assembly wherein the swing gate is cammed open by a fuel element or spacer but is reliably closed at a desired closing rate primarily by hydraulic forces in the absence of a fuel charge.

  4. Declarative camera control for automatic cinematography

    SciTech Connect (OSTI)

    Christianson, D.B.; Anderson, S.E.; Li-wei He

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  5. Methods for automatic trigger threshold adjustment

    DOE Patents [OSTI]

    Welch, Benjamin J; Partridge, Michael E

    2014-03-18

    Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time-based or counter-based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
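
    The scheme described in this abstract (re-centering the thresholds on a re-measured quiescent level, then requiring a qualification count before declaring a trigger) can be sketched in a few lines. The function names, offset value, and qualification width below are illustrative assumptions, not details from the patent:

```python
def adjust_thresholds(quiescent_level, offset=5.0):
    """Re-center the upper/lower trigger thresholds around the measured quiescent level."""
    return quiescent_level + offset, quiescent_level - offset

def should_trigger(samples, quiescent_level, offset=5.0, qual_width=3):
    """Trigger only after qual_width consecutive samples fall outside the thresholds."""
    upper, lower = adjust_thresholds(quiescent_level, offset)
    run = 0  # qualification width counter
    for s in samples:
        run = run + 1 if (s > upper or s < lower) else 0
        if run >= qual_width:
            return True
    return False
```

    Because the thresholds track the re-measured quiescent level, a signal that has slowly drifted upward no longer produces false triggers against thresholds computed from the original baseline.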

  6. Automatic annotation of organellar genomes with DOGMA

    SciTech Connect (OSTI)

    Wyman, Stacia; Jansen, Robert K.; Boore, Jeffrey L.

    2004-06-01

    Dual Organellar GenoMe Annotator (DOGMA) automates the annotation of extra-nuclear organellar (chloroplast and animal mitochondrial) genomes. It is a web-based package that allows the use of comparative BLAST searches to identify and annotate genes in a genome. DOGMA presents a list of putative genes to the user in a graphical format for viewing and editing. Annotations are stored on our password-protected server. Complete annotations can be extracted for direct submission to GenBank. Furthermore, intergenic regions of specified length can be extracted, as well the nucleotide sequences and amino acid sequences of the genes.

  7. Automatic monitoring of vibration welding equipment

    DOE Patents [OSTI]

    Spicer, John Patrick; Chakraborty, Debejyo; Wincek, Michael Anthony; Wang, Hui; Abell, Jeffrey A; Bracey, Jennifer; Cai, Wayne W

    2014-10-14

    A vibration welding system includes vibration welding equipment having a welding horn and anvil, a host device, a check station, and a robot. The robot moves the horn and anvil via an arm to the check station. Sensors, e.g., temperature sensors, are positioned with respect to the welding equipment. Additional sensors are positioned with respect to the check station, including a pressure-sensitive array. The host device, which monitors a condition of the welding equipment, measures signals via the sensors positioned with respect to the welding equipment when the horn is actively forming a weld. The robot moves the horn and anvil to the check station, activates the check station sensors at the check station, and determines a condition of the welding equipment by processing the received signals. Acoustic, force, temperature, displacement, amplitude, and/or attitude/gyroscopic sensors may be used.

  8. Remote Automatic Material On-Line Sensor

    SciTech Connect (OSTI)

    Magnuson, Erik

    2005-12-20

    Low cost NMR sensor for measuring moisture content of forest products. The Department of Energy (DOE) Industries of the Future (IOF) program seeks development and implementation of technologies that make industry more efficient--in particular, more energy-efficient. Quantum Magnetics, Inc. (QM), a wholly-owned subsidiary of GE Security, received an award under the program to investigate roles for low-cost Nuclear Magnetic Resonance (NMR) technology in furtherance of these goals. Most NMR systems are designed for high-resolution spectroscopy applications. These systems use intense magnetic fields produced by superconducting magnets that drive price and operating cost to levels beyond industry tolerance. At low magnetic fields, achievable at low cost, one loses the ability to obtain spectroscopic information. However, measuring the time constants associated with the NMR signal, called NMR relaxometry, gives indications of chemical and physical states of interest to process control and optimization. It was the purpose of this effort to investigate the technical and economic feasibility of using such low-field, low-cost NMR to monitor parameters enabling greater process efficiencies. The primary target industry identified in the Cooperative Development Agreement was the wood industry, where the moisture content of wood is a key process parameter from the time the cut tree enters a mill until the time it is delivered as pieces of lumber. Extracting the moisture is energy consuming, and improvements in drying efficiency stand to reduce costs and emissions substantially. QM designed and developed a new, low-cost NMR instrument suitable for inspecting lumber up to 3 inches by 12 inches in cross section, and other materials of similar size. Low cost is achieved via an inexpensive, permanent magnet and low-cost NMR spectrometer electronics. 
    Laboratory testing demonstrated that the NMR system is capable of accurate (±0.5%) measurements of the moisture content of wood for moisture ranging from 2% to over 140% (referenced to the wood's dry weight). Accuracy exceeded that offered by existing instrumentation when the moisture content was in excess of the fiber saturation point (approximately 20%). Accuracy was independent of the wood form: solid wood, wood chips, or sawdust. The prototype NMR system was designed and built for incorporation and use in a beta test site. Beta testing is under way at the pilot plant operated by the Pulp and Paper Research Institute of Canada (PAPRICAN) in Vancouver, B.C. Other industries were also investigated. For example, laboratory testing demonstrated that low-field NMR is capable of measuring the hydrogen content of calcium oxide (quicklime). Hydrogen content measurement can be done both rapidly (on the order of 1 second) and nondestructively. Measurement of moisture in quicklime affects energy consumption in the steel industry. Further advances in system electronics, ongoing under DOD support, will enable yet more substantial system cost reductions over the prototype system, opening up a wider range of utility.

  9. Reaction Mechanism Generator: Automatic construction of chemical...

    Office of Scientific and Technical Information (OSTI)

    Additional Journal Information: Journal Volume: 203; Journal Issue: C; Journal ID: ISSN 0010-4655 Publisher: Elsevier Sponsoring Org: USDOE Office of Science (SC), Basic Energy ...

  10. Blank fire configuration for automatic pistol

    DOE Patents [OSTI]

    Teague, Tommy L.

    1990-01-01

    A pistol configured to fire blank cartridges includes a modified barrel with a breech portion connected to an aligned inner sleeve. Around the inner sleeve, there is disposed an outer sleeve having a vent therein through which the cartridge discharges. The breech portion is connected to a barrel anchor to move backward in a slight arc when the pistol is fired. A spring retention rod projects from the barrel anchor and receives a shortened recoil spring therearound which recoil spring has one end abutting a stop on the barrel anchor and the other end in abutment with the end of a spring retaining cup. The spring retaining cup is engaged by a flange projecting from a slide so that when the pistol is fired, the slide moves rearwardly against the compression of the spring to eject the spent cartridge and then moves forwardly under the urging of the spring to load a fresh cartridge into the breech portion. The spring then returns all of the slidable elements to their initial position so that the pistol may again be fired.

  11. Automatic control of oscillatory penetration apparatus

    SciTech Connect (OSTI)

    Lucon, Peter A

    2015-01-06

    A system and method for controlling an oscillatory penetration apparatus. An embodiment is a system and method for controlling a sonic drill having a displacement and an operating range and operating at a phase difference, said sonic drill comprising a push-pull piston and eccentrics, said method comprising: operating the push-pull piston at an initial push-pull force while the eccentrics are operated at a plurality of different operating frequencies within the operating range of the sonic drill and measuring the displacement at each operating frequency; determining an efficient operating frequency for the material being drilled and operating the eccentrics at said efficient operating frequency; determining the phase difference at which the sonic drill is operating; and if the phase difference is not substantially equal to minus ninety degrees, operating the push-pull piston at another push-pull force.
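
    The control sequence in this abstract (sweep the eccentrics over candidate frequencies, keep the most efficient one, then correct the push-pull force until the phase difference reaches minus ninety degrees) can be illustrated with a short sketch. The helper names, step size, tolerance, and the direction of the force correction are assumptions for illustration only:

```python
def find_efficient_frequency(frequencies, measure_displacement):
    """Sweep candidate operating frequencies within the drill's operating range
    and keep the one giving the largest measured displacement for the material."""
    return max(frequencies, key=measure_displacement)

def adjust_push_pull(phase_deg, force, step=10.0, tol=1.0):
    """Leave the push-pull force unchanged when the phase difference is within
    tol of -90 degrees; otherwise nudge it (direction of the nudge is assumed)."""
    if abs(phase_deg + 90.0) <= tol:
        return force
    return force + step if phase_deg < -90.0 else force - step
```

    In a real controller the displacement measurement would come from drill instrumentation and the force correction would iterate until the -90 degree condition is met.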

  12. Human-system Interfaces for Automatic Systems

    SciTech Connect (OSTI)

    O'Hara, J.M.; Higgins, J.; Fleger, S.; Barnes, V.

    2010-11-07

    Automation is ubiquitous in modern complex systems, and commercial nuclear power plants are no exception. Automation is applied to a wide range of functions including monitoring and detection, situation assessment, response planning, and response implementation. Automation has become a 'team player' supporting personnel in nearly all aspects of system operation. In light of its increasing use and importance in new and future plants, guidance is needed to conduct safety reviews of the operator's interface with automation. The objective of this research was to develop such guidance. We first characterized the important HFE aspects of automation, including six dimensions: levels, functions, processes, modes, flexibility, and reliability. Next, we reviewed literature on the effects of all of these aspects of automation on human performance, and on the design of human-system interfaces (HSIs). Then, we used this technical basis established from the literature to identify general principles for human-automation interaction and to develop review guidelines. The guidelines consist of the following seven topics: automation displays, interaction and control, automation modes, automation levels, adaptive automation, error tolerance and failure management, and HSI integration. In addition, our study identified several topics for additional research.

  13. Pinpointing the cause of an outage for something as complex and interconnected as the high voltage transmission system is a ve

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Ellen P. Vancko evancko@nerc.com Electric System Update: Sunday August 17, 2003, 5:00 p.m. The electric transmission system is now operating reliably. All electric power transmission lines that were removed from service during the blackout on August 14, 2003, have been returned to service with one exception. The lines between Michigan and Ontario remain out of service due to operational security reasons; however, they are expected to be returned to service later this evening. Most of the

  14. VERDE Analytic Modules

    Energy Science and Technology Software Center (OSTI)

    2008-01-15

    The Verde Analytic Modules permit the user to ingest openly available data feeds about phenomenology (storm tracks, wind, precipitation, earthquakes, wildfires, and similar natural and manmade power grid disruptions) and forecast power outages, restoration times, customers outaged, and key facilities that will lose power. Damage areas are predicted using historic damage criteria of the affected area. The modules use a cellular-automata approach to estimating the distribution circuits assigned to geo-located substations. Population estimates served within the service areas are located within 1 km grid cells and converted to customer counts through demographic estimation of households and commercial firms within the population cells. Restoration times are estimated by agent-based simulation of restoration crews working according to utility-published prioritization calibrated by historic performance.

  15. Nissan: ISO 50001 - What Counts! | Department of Energy

    Energy Savers [EERE]

    Nick Sinai is the U.S. Deputy CTO, White House Office of Science and Technology Policy, and the former Energy and Environment Director for the Federal Communications Commission.

  16. When DNA Needs to Stand Up and Be Counted

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Schematic of DNA structures in various conformations on a gold surface. Differences in overall structure and orientation are emphasized by color-coding of DNA...

  17. Abbreviated Total-Count Logging Procedures for Use in Remedial...

    Broader source: Energy.gov (indexed) [DOE]


  18. Development of the SRS environmental counting laboratory gamma spectroscopy system

    SciTech Connect (OSTI)

    Filler, D.A.; Crandall, B.S.

    1996-12-31

    The Savannah River Site (SRS), one of several U.S. Department of Energy (DOE) facilities, produces nuclear materials for national defense and for other government and civilian uses. SRS ceased production of defense materials in 1988, and the site's main activities now involve waste management and environmental restoration. These programs have generated extensive effluent monitoring and environmental surveillance programs for the Environmental Monitoring Section (EMS), which performs approximately 105,000 radiological analyses on 30,000 samples a year. Gamma spectroscopy is performed on an estimated 10,000 samples annually. This report describes a program to develop and improve the EMS system.

  19. A New Shallow Underground Gas-Proportional Counting Lab - First...

    Office of Scientific and Technical Information (OSTI)


  20. Particle Energy Spectrum, Revisited from a Counting Statistics Perspective

    SciTech Connect (OSTI)

    Yuan, D.; Marks, D.G.; Guss, P.P.

    2012-07-16

    This document is a slide-show-style presentation of a new covariance estimation method for gamma spectra and neutron cross sections.