OSTI.GOV
U.S. Department of Energy
Office of Scientific and Technical Information

Title: Quantifying the Human Dimension through Methodology and Technology.

Abstract

Abstract not provided.

Authors:
Avina, Glory Emmanuel
Publication Date:
July 2016
Research Org.:
Sandia National Lab. (SNL-CA), Livermore, CA (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA)
OSTI Identifier:
1372170
Report Number(s):
SAND2016-6747C
645222
DOE Contract Number:
AC04-94AL85000
Resource Type:
Conference
Resource Relation:
Conference: Proposed for presentation at the Human Computer Interaction International conference, held July 17-22, 2016, in Toronto, Canada.
Country of Publication:
United States
Language:
English

Citation Formats

Avina, Glory Emmanuel. Quantifying the Human Dimension through Methodology and Technology. United States: N. p., 2016. Web.
Avina, Glory Emmanuel. Quantifying the Human Dimension through Methodology and Technology. United States.
Avina, Glory Emmanuel. 2016. "Quantifying the Human Dimension through Methodology and Technology." United States. https://www.osti.gov/servlets/purl/1372170.
@article{osti_1372170,
  title = {Quantifying the Human Dimension through Methodology and Technology.},
  author = {Avina, Glory Emmanuel},
  abstractNote = {Abstract not provided.},
  doi = {},
  journal = {},
  number = {},
  volume = {},
  place = {United States},
  year = {2016},
  month = {7}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.

Similar Records:
  • To support the development of a refined human reliability analysis (HRA) framework, to address identified HRA user needs, and to improve HRA modeling, unique aspects of human performance have been identified from an analysis of actual plant-specific events. Through the use of the refined framework, relationships between the following HRA, human factors, and probabilistic risk assessment (PRA) elements were described: the PRA model, plant states, plant conditions, PRA basic events, unsafe human actions, error mechanisms, and performance shaping factors (PSFs). The event analyses performed in the context of the refined HRA framework identified the need for new HRA methods that are capable of: evaluating a range of different error mechanisms (e.g., slips as well as mistakes); addressing errors of commission (EOCs) and dependencies between human actions; and incorporating the influence of plant conditions and multiple PSFs on human actions. This report discusses the results of the assessment of user needs, the refinement of the existing HRA framework, and the current status on EOCs and human dependencies. (A sketch of a data model linking these elements appears after this list.)
  • In August 1988, the Nuclear Regulatory Commission (NRC) approved the final version of a revised rule on the acceptance of emergency core cooling systems (ECCS) entitled "Emergency Core Cooling System; Revisions to Acceptance Criteria." The revised rule states that an alternate ECCS performance analysis, based on best-estimate methods, may be used to provide more realistic estimates of plant safety margins, provided the licensee quantifies the uncertainty of the estimates and includes that uncertainty when comparing the calculated results with the prescribed acceptance limits. To support the revised ECCS rule, the NRC and its contractors and consultants have developed and demonstrated a method called the Code Scaling, Applicability, and Uncertainty (CSAU) evaluation methodology. It is an auditable, traceable, and practical method for combining quantitative analyses and expert opinions to arrive at computed values of uncertainty. This paper provides an overview of the CSAU evaluation methodology and its application to a postulated cold-leg, large-break loss-of-coolant accident in a Westinghouse four-loop pressurized water reactor with 17 × 17 fuel. The code selected for this demonstration of the CSAU methodology was TRAC-PF1/MOD1, Version 14.3. 23 refs., 5 figs., 1 tab. (A generic Monte Carlo uncertainty-propagation sketch appears after this list.)
  • Architectural coatings (referred to as paints), along with the thinners/reducers and cleanup solvents used during their application, contain volatile organic compounds (VOCs), which are precursors to ground-level ozone formation. Some of these paint compounds create hazardous air pollutants (HAPs), which are toxic. The nationally recommended emission factor (EF) of 4.6 lbs/year per capita is based on data from the 1970s. This paper documents the methodologies and the National Paint & Coatings Association data sets used to develop revised per capita emission factors (e.g., 3.6 lbs/year per capita for 1993) for estimating and forecasting the VOC air emissions from the area source category of architectural coatings. Emissions estimates, forecasts, trends, and reasons for these trends are presented. Future emissions inventory (EI) challenges are addressed in light of data availability, information networks, and the proposed category of Architectural and Industrial Maintenance (AIM) coatings. (The per capita EF arithmetic is sketched after this list.)
  • Automobile refinishing coatings (referred to as paints), along with the paint thinners, reducers, hardeners, catalysts, and cleanup solvents used during their application, contain volatile organic compounds (VOCs), which are precursors to ground-level ozone formation. Some of these painting compounds create hazardous air pollutants (HAPs), which are toxic. This paper documents the methodology, data sets, and results of surveys (conducted in the fall of 1995) used to develop revised per capita emission factors for estimating and forecasting the VOC air emissions from the area source category of automobile refinishing. Emissions estimates, forecasts, trends, and reasons for these trends are presented. Future emissions inventory (EI) challenges are addressed in light of data availability and information networks. (A sketch of deriving a per capita EF from survey totals appears after this list.)
  • Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in the resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, "How good or useful are these predictions?" We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia. (A sketch of scoring a predictor and comparing checkpointing strategies appears after this list.)
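
The first similar-record abstract above describes relationships among PRA basic events, unsafe human actions, error mechanisms, and performance shaping factors (PSFs). The sketch below is a minimal, hypothetical Python data model of those links; the class names, the multiplicative PSF treatment, and the example numbers are assumptions for illustration, not the refined HRA framework itself.

# Minimal sketch (hypothetical names) of how the HRA/PRA elements named in the
# abstract above, basic events, unsafe actions, error mechanisms, and PSFs,
# might be linked in a data model.  Not the report's actual framework.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PerformanceShapingFactor:
    name: str           # e.g. "time pressure"
    multiplier: float   # assumed multiplicative influence on error probability


@dataclass
class UnsafeHumanAction:
    description: str
    error_mechanism: str                     # e.g. "slip" or "mistake"
    is_error_of_commission: bool
    psfs: List[PerformanceShapingFactor] = field(default_factory=list)

    def adjusted_probability(self, nominal_hep: float) -> float:
        """Scale a nominal human error probability by the PSF multipliers."""
        p = nominal_hep
        for psf in self.psfs:
            p *= psf.multiplier
        return min(p, 1.0)


@dataclass
class PRABasicEvent:
    name: str
    plant_condition: str
    unsafe_actions: List[UnsafeHumanAction] = field(default_factory=list)


# Example: an error of commission under time pressure (all values illustrative).
action = UnsafeHumanAction(
    description="Operator isolates the wrong cooling train",
    error_mechanism="mistake",
    is_error_of_commission=True,
    psfs=[PerformanceShapingFactor("time pressure", 2.0)],
)
event = PRABasicEvent("OP-ISOLATE-WRONG-TRAIN", "loss of feedwater", [action])
print(event.name, action.adjusted_probability(nominal_hep=1e-3))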
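
The CSAU abstract describes an auditable method for combining quantitative analyses and expert judgment into computed uncertainty values. The sketch below is not the CSAU procedure (which also prescribes phenomena identification and ranking, code applicability assessment, and scaling analysis); it only illustrates the final uncertainty-propagation step with a generic Monte Carlo loop over a hypothetical surrogate for peak clad temperature. All distributions and coefficients are placeholders.

# Generic Monte Carlo uncertainty-propagation sketch, NOT the CSAU procedure
# itself.  The surrogate response function and the input distributions are
# hypothetical placeholders standing in for code calculations such as TRAC-PF1.
import random

random.seed(0)


def peak_clad_temperature(break_area, decay_heat, fuel_conductivity):
    """Hypothetical surrogate for a code-calculated peak clad temperature (K)."""
    return 1000.0 + 400.0 * break_area + 300.0 * decay_heat - 200.0 * fuel_conductivity


samples = []
for _ in range(10_000):
    # Input uncertainties (ranges chosen purely for illustration).
    break_area = random.uniform(0.8, 1.2)         # normalized break area
    decay_heat = random.gauss(1.0, 0.05)          # decay-heat multiplier
    fuel_conductivity = random.gauss(1.0, 0.10)   # conductivity multiplier
    samples.append(peak_clad_temperature(break_area, decay_heat, fuel_conductivity))

samples.sort()
p95 = samples[int(0.95 * len(samples))]
print(f"95th-percentile peak clad temperature: {p95:.0f} K "
      f"(compare against the 2200 F / 1478 K acceptance limit)")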
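
The architectural-coatings abstract turns on per capita emission factors (4.6 lbs/year per capita from 1970s data versus a revised 3.6 lbs/year per capita for 1993). The arithmetic of applying such a factor to an inventory area is simple; the sketch below uses a placeholder population, and only the two EF values come from the abstract.

# Arithmetic behind a per capita emission factor (EF) inventory estimate.
# The 4.6 and 3.6 lb/yr-per-capita EFs come from the abstract above; the
# population figure is a placeholder, not data from the paper.
LB_PER_TON = 2000.0


def annual_voc_tons(population: int, ef_lb_per_capita: float) -> float:
    """Area-source VOC emissions (tons/yr) = population x per capita EF."""
    return population * ef_lb_per_capita / LB_PER_TON


population = 1_000_000                             # placeholder county population
old_estimate = annual_voc_tons(population, 4.6)    # 1970s-era EF
new_estimate = annual_voc_tons(population, 3.6)    # revised 1993 EF
print(f"old EF: {old_estimate:.0f} tons/yr, revised EF: {new_estimate:.0f} tons/yr")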
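
The automobile-refinishing abstract derives its revised per capita emission factors from fall 1995 shop surveys. One common way to back an EF out of a partial survey is to scale the reported VOC usage to the full shop population and divide by the residents served; the sketch below illustrates that calculation with entirely hypothetical numbers, since the paper's survey data are not reproduced here.

# How a per capita EF might be backed out of survey results: total VOC
# emitted by surveyed shops, scaled to the full shop population and divided
# by residents served.  Every number here is hypothetical.
def per_capita_ef(surveyed_voc_lb, shops_surveyed, shops_total, population):
    """Estimate lb VOC per capita per year from a partial shop survey."""
    scaled_voc_lb = surveyed_voc_lb * (shops_total / shops_surveyed)
    return scaled_voc_lb / population


ef = per_capita_ef(
    surveyed_voc_lb=150_000,   # lb VOC/yr reported by responding shops
    shops_surveyed=120,
    shops_total=400,
    population=1_000_000,
)
print(f"derived EF: {ef:.2f} lb VOC per capita per year")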
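
The HPC failure-prediction abstract asks how good or useful imperfect failure predictions are. One hedged way to frame that question is to score the predictor (precision and recall) and compare expected lost work under periodic checkpointing versus prediction-triggered checkpointing; the simple expected-value model below is an illustration under stated assumptions, not the OVIS project's methodology. A fuller cost-benefit analysis would also charge for false positives (unnecessary checkpoints or migrations), which is where precision enters.

# Hedged sketch: scoring a failure predictor and comparing expected lost work
# under periodic vs prediction-triggered checkpointing.  This is NOT the OVIS
# methodology; it is a deliberately simple expected-value comparison.

def precision_recall(true_pos, false_pos, false_neg):
    """Standard predictor accuracy metrics from a confusion-matrix tally."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall


def expected_lost_work_periodic(mttf_hours, interval_hours):
    """On average, half a checkpoint interval of work is lost per failure."""
    failures_per_1000h = 1000.0 / mttf_hours
    return failures_per_1000h * interval_hours / 2.0


def expected_lost_work_predicted(mttf_hours, recall, interval_hours):
    """Predicted failures are checkpointed just in time (approx. no lost work);
    missed failures (1 - recall) still lose half an interval on average."""
    failures_per_1000h = 1000.0 / mttf_hours
    return failures_per_1000h * (1.0 - recall) * interval_hours / 2.0


p, r = precision_recall(true_pos=40, false_pos=10, false_neg=10)
print(f"predictor precision={p:.2f}, recall={r:.2f}")
print("lost work per 1000 h, periodic:  "
      f"{expected_lost_work_periodic(mttf_hours=50, interval_hours=4):.1f} h")
print("lost work per 1000 h, predicted: "
      f"{expected_lost_work_predicted(mttf_hours=50, recall=r, interval_hours=4):.1f} h")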