OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Validating induced seismicity forecast models - Induced Seismicity Test Bench

Authors:
 Király-Proag, Eszter [1]; Zechar, J. Douglas [1]; Gischig, Valentin [2]; Wiemer, Stefan [1]; Karvounis, Dimitrios [1]; Doetsch, Joseph [2]
  1. Swiss Seismological Service, ETH Zurich, Zurich, Switzerland
  2. Swiss Competence Center for Energy Research (SCCER-SoE), ETH Zurich, Zurich, Switzerland
Publication Date:
2016-08-30
Sponsoring Org.:
USDOE Office of Energy Efficiency and Renewable Energy (EERE), Geothermal Technologies Office (EE-4G)
OSTI Identifier:
1402133
Resource Type:
Journal Article: Publisher's Accepted Manuscript
Journal Name:
Journal of Geophysical Research: Solid Earth
Additional Journal Information:
Journal Volume: 121; Journal Issue: 8; Related Information: CHORUS Timestamp: 2017-10-23 16:44:38; Journal ID: ISSN 2169-9313
Publisher:
Wiley Blackwell (John Wiley & Sons)
Country of Publication:
United States
Language:
English

Citation Formats

Király-Proag, Eszter, Zechar, J. Douglas, Gischig, Valentin, Wiemer, Stefan, Karvounis, Dimitrios, and Doetsch, Joseph. Validating induced seismicity forecast models - Induced Seismicity Test Bench. United States: N. p., 2016. Web. doi:10.1002/2016JB013236.
Király-Proag, Eszter, Zechar, J. Douglas, Gischig, Valentin, Wiemer, Stefan, Karvounis, Dimitrios, & Doetsch, Joseph. Validating induced seismicity forecast models - Induced Seismicity Test Bench. United States. doi:10.1002/2016JB013236.
Király-Proag, Eszter, Zechar, J. Douglas, Gischig, Valentin, Wiemer, Stefan, Karvounis, Dimitrios, and Doetsch, Joseph. Tue Aug 30, 2016. "Validating induced seismicity forecast models - Induced Seismicity Test Bench". United States. doi:10.1002/2016JB013236.
@article{osti_1402133,
title = {Validating induced seismicity forecast models - Induced Seismicity Test Bench},
author = {Király-Proag, Eszter and Zechar, J. Douglas and Gischig, Valentin and Wiemer, Stefan and Karvounis, Dimitrios and Doetsch, Joseph},
abstractNote = {},
doi = {10.1002/2016JB013236},
journal = {Journal of Geophysical Research: Solid Earth},
number = 8,
volume = 121,
place = {United States},
year = {2016},
month = {aug}
}
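OSTI's raw BibTeX export carries the full timestamp string (Tue Aug 30 00:00:00 EDT 2016) in the year and month fields; the entry above shows the normalized values. A minimal Python sketch of that normalization, offered only as an illustration of the cleanup (the function name and script wrapper are assumptions, not part of the OSTI export):

# Normalize an OSTI-style BibTeX date value such as
#   year = {Tue Aug 30 00:00:00 EDT 2016}
# into standard BibTeX fields: year = {2016}, month = {aug}.

def normalize_osti_date(raw: str) -> tuple:
    """Return (year, month) parsed from e.g. 'Tue Aug 30 00:00:00 EDT 2016'."""
    tokens = raw.split()                   # ['Tue', 'Aug', '30', '00:00:00', 'EDT', '2016']
    return tokens[-1], tokens[1].lower()   # year is the last token, month the second

if __name__ == "__main__":
    year, month = normalize_osti_date("Tue Aug 30 00:00:00 EDT 2016")
    print(f"year = {{{year}}}, month = {{{month}}}")   # prints: year = {2016}, month = {aug}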

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record at 10.1002/2016JB013236
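The version of record resolves through the standard DOI resolver, https://doi.org/10.1002/2016JB013236. A minimal Python sketch of following that redirect with the standard library; this is an illustration only, not OSTI's or the publisher's API, and the publisher's site may still require a browser or institutional access for the full text:

import urllib.request

DOI = "10.1002/2016JB013236"

# doi.org redirects to the publisher's landing page for this record.
req = urllib.request.Request(
    f"https://doi.org/{DOI}",
    headers={"User-Agent": "Mozilla/5.0"},   # some publishers reject default clients
)
with urllib.request.urlopen(req, timeout=30) as resp:
    print("Resolved to:", resp.geturl())     # final URL after redirects
    print("HTTP status:", resp.status)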

Citation Metrics:
Cited by: 5 works
Citation information provided by
Web of Science

Similar Records:
  • Models are an essential component of any assessment of ecosystem response to changes in global climate and elevated atmospheric carbon dioxide concentration. The problem with these models is that their long-term predictions are impossible to test unambiguously except by allowing enough time for the full ecosystem response to develop. Unfortunately, when one must assess potentially devastating changes in the global environment, time becomes a luxury. Therefore, confidence in these models has to be built through the accumulation of fairly weak corroborating evidence rather than through a few crucial and unambiguous tests. The criteria employed to judge the value of these models are thus likely to differ greatly from those used to judge finer-scale models, which are more amenable to the scientific tradition of hypothesis formulation and testing. This article looks at four categories of tests that could potentially be used to evaluate ERCC (ecosystem response to climate and carbon dioxide concentration) models and illustrates why they cannot be considered crucial tests. The synthesis role of ERCC models is also discussed, along with why they are vital to any assessment of long-term responses of ecosystems to changes in global climate and carbon dioxide concentration. 49 refs., 2 figs.
  • In the 1970s the Energy Information Administration initiated a program calling for review and evaluation of its data and validation of energy models used in support of policy-making processes, including the large-scale energy models developed under DOE auspices. This paper proposes and illustrates an alternative approach to validation: the use of experimental methods. An allocation function is defined, some of its uses in energy modelling are described, and three methods are discussed for allocating energy aggregates to be evaluated in experiments. The structure of the experimental framework is detailed, and the features of the experimental design are reviewed. Results are reported and a summary is given of the conclusions. 24 references, 4 tables.
  • Evaluated criticality benchmark data obtained at the Static Criticality Experiment Facility (STACY) account for a large percentage of the low-enriched uranium (LEU) solution systems documented in the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. These data are available for validation of computer codes and nuclear data used for criticality safety analyses of LEU solution systems. The calculated k_eff's for the water-reflected STACY criticality experiments have been overestimated with JENDL-3.2 by ~0.7%. These overestimations were kept in mind while making modifications of the fission spectrum and the fission cross section of ²³⁵U, and the (n,p) cross section of ¹⁴N, in JENDL-3.3. Because of these modifications, the k_eff's calculated with JENDL-3.3 were largely improved. The contributions of these modifications in JENDL-3.3 with respect to JENDL-3.2 and ENDF/B-VI.5 were investigated by performing perturbation calculations. The overestimation of the elastic-scattering cross section of ⁵⁶Fe in the mega-electron-volt range was one of the reasons for the k_eff overestimations for the STACY experiments with JENDL-3.2. The modification of ⁵⁶Fe cross sections in JENDL-3.3 reduces k_eff's in the STACY experiments by 0.2%. The dependence of calculated k_eff's on uranium concentration still exists in JENDL-3.3. The overestimation of calculated k_eff's for the STACY experiments with JENDL-3.3 is not insignificant and is as much as 0.6%. These problems are to be resolved in a future evaluation of the cross-section library.
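The percentages quoted in the last abstract above follow the usual bias convention for criticality benchmarks, 100 * (C/E - 1), where C is the calculated and E the experimental k_eff. A small, purely illustrative Python sketch of that bookkeeping; the k_eff values below are hypothetical placeholders chosen only to echo the quoted magnitudes and are not STACY benchmark data:

def keff_bias_percent(k_calc: float, k_exp: float = 1.0) -> float:
    """Bias of a calculated k_eff against the experimental value, in percent: 100*(C/E - 1)."""
    return 100.0 * (k_calc / k_exp - 1.0)

# Hypothetical values mirroring the quoted ~0.7% (JENDL-3.2) and 0.6% (JENDL-3.3) overestimates.
print(f"JENDL-3.2 bias: {keff_bias_percent(1.007):+.1f}%")   # -> +0.7%
print(f"JENDL-3.3 bias: {keff_bias_percent(1.006):+.1f}%")   # -> +0.6%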