OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

Abstract

Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to estimate the likelihood of injected carbon dioxide encountering a fault and migrating up it, primarily due to buoyancy. Fault population statistics provide one of the main inputs for calculating the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, which indicates that a carbon dioxide plume from a previously planned injection would have had a 3% chance of encountering a fully seal-offsetting fault.
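A common way to turn an areal fault density into an encounter probability is to treat fault occurrence over the plume footprint as a spatial Poisson process. The sketch below illustrates that idea only; the density and footprint values are hypothetical, and this is not necessarily the calculation used in the report.

```python
import math

def encounter_probability(fault_density_per_km2: float, plume_area_km2: float) -> float:
    """Probability that a plume footprint encounters at least one fault,
    assuming faults follow a homogeneous spatial Poisson process.

    Illustrative sketch only: the Poisson assumption and the input values
    below are placeholders, not the report's actual method or data.
    """
    expected_faults = fault_density_per_km2 * plume_area_km2
    return 1.0 - math.exp(-expected_faults)

# Hypothetical numbers: 0.001 faults per km^2 over a 30 km^2 plume footprint.
p = encounter_probability(0.001, 30.0)
print(f"{p:.3%}")
```

With these placeholder inputs the expected fault count is 0.03, so the encounter probability is close to, but slightly below, 3%.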

Authors:
Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.
Publication Date:
May 15, 2011
Research Org.:
Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
Sponsoring Org.:
Earth Sciences Division
OSTI Identifier:
1016011
Report Number(s):
LBNL-4538E
TRN: US201112%%377
DOE Contract Number:
DE-AC02-05CH11231
Resource Type:
Technical Report
Resource Relation:
Related Information: Journal Publication Date: 2011
Country of Publication:
United States
Language:
English
Subject:
54; 58; CAP ROCK; CARBON; CARBON DIOXIDE; CLIMATES; CLIMATIC CHANGE; GEOLOGIC STRATA; MITIGATION; PLUMES; POWER GENERATION; PROBABILITY; SIMULATION; SITE SELECTION; STATISTICS; STORAGE

Citation Formats

Jordan, P.D., Oldenburg, C.M., and Nicot, J.-P. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation. United States: N. p., 2011. Web. doi:10.2172/1016011.
Jordan, P.D., Oldenburg, C.M., & Nicot, J.-P. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation. United States. doi:10.2172/1016011.
Jordan, P.D., Oldenburg, C.M., and Nicot, J.-P. 2011. "Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation". United States. doi:10.2172/1016011. https://www.osti.gov/servlets/purl/1016011.
@article{osti_1016011,
title = {Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation},
author = {Jordan, P.D. and Oldenburg, C.M. and Nicot, J.-P.},
abstractNote = {Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to estimate the likelihood of injected carbon dioxide encountering a fault and migrating up it, primarily due to buoyancy. Fault population statistics provide one of the main inputs for calculating the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, which indicates that a carbon dioxide plume from a previously planned injection would have had a 3% chance of encountering a fully seal-offsetting fault.},
doi = {10.2172/1016011},
place = {United States},
year = {2011},
month = {5}
}

Technical Report:

  • The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector by applying Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic (ROC) curve [10, 11], which is actually a family of curves depicting P_D vs. P_FA, parameterized by varying levels of signal-to-noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work.
Specifically, this work develops a set of algorithms and MATLAB implementations to compute the decision threshold r_0* that will provide the desired probability of false alarm P_FA for the matched filter. The goal is to use prior knowledge of the background data to generate an estimate of the probability density function (pdf) [13] of the matched filter output r for the case in which the data measurement contains only background data (we call this case the null hypothesis, or H_0) [10, 11]. We call the pdf estimate f̂(r|H_0). In this report, we use histograms and Parzen pdf estimators [14, 15, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Once the estimate is obtained, it can be integrated to compute an estimate of P_FA as a function of the matched filter detection threshold r. We can then interpolate r vs. P_FA to obtain a curve that gives the threshold r_0* providing the desired probability of false alarm P_FA for the matched filter. Processing results have been computed using both simulated and real LASI data sets. The algorithms and codes have been validated, and the results using LASI data are presented here. Future work includes applying the pdf estimation and CFAR threshold calculation algorithms to the LASI matched filter based upon global background statistics, and developing a new adaptive matched filter algorithm based upon local background statistics. Another goal is to implement the 4-Gamma pdf modeling method proposed by Stocker et al. [4] and to compare results using histograms and the Parzen pdf estimators.
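The pdf-estimation and threshold-interpolation steps described in this record can be sketched as follows. This is an illustrative reconstruction using a Gaussian-kernel Parzen estimate (`scipy.stats.gaussian_kde`) on synthetic background scores; the real procedure operates on LASI matched-filter statistics, and the sample here is a stand-in.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Stand-in for matched-filter scores r computed on background-only (H0) data;
# real background statistics would replace this synthetic Gaussian sample.
r_background = rng.normal(0.0, 1.0, 5000)

# Parzen (Gaussian-kernel) estimate of f(r | H0).
pdf_hat = gaussian_kde(r_background)

# Integrate the pdf estimate to get P_FA(r0) = P(r > r0 | H0).
grid = np.linspace(r_background.min(), r_background.max(), 2000)
cdf = np.cumsum(pdf_hat(grid))
cdf /= cdf[-1]                 # normalize the Riemann sum to a proper CDF
p_fa = 1.0 - cdf

# Interpolate r vs. P_FA to find the threshold r0* for a desired P_FA.
desired_p_fa = 0.01
# p_fa decreases with the threshold, so reverse both arrays for np.interp,
# which requires increasing x-coordinates.
r0_star = np.interp(desired_p_fa, p_fa[::-1], grid[::-1])
print(f"threshold r0* for P_FA={desired_p_fa}: {r0_star:.2f}")
```

For this standard-normal stand-in, the interpolated threshold lands near the theoretical 1% point (about 2.33), with some smoothing from the kernel bandwidth.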
  • This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower-moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model (exponential or Weibull) is fit.
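The tail-fitting idea in this record can be sketched as a peaks-over-threshold calculation: fit a shifted exponential to exceedances of the lower-bound threshold and convert the exceedance rate into a distribution of the annual maximum. The function name, synthetic data, Poisson-arrival assumption, and exponential (rather than Weibull) choice below are all illustrative, not the FITS routine's actual interface.

```python
import numpy as np

def annual_max_cdf(x, data, threshold, duration_years):
    """P(annual maximum <= x) for x above `threshold`, from a shifted
    exponential fit to threshold exceedances and a Poisson model for
    exceedance arrivals. Illustrative sketch only."""
    exceedances = data[data > threshold] - threshold
    rate_per_year = len(exceedances) / duration_years   # exceedance rate
    scale = exceedances.mean()                          # exponential MLE of the tail scale
    # Survival of a single exceedance past level x, then thin by the
    # expected number of exceedances per year.
    survival = np.exp(-(np.asarray(x) - threshold) / scale)
    return np.exp(-rate_per_year * survival)

rng = np.random.default_rng(1)
data = rng.exponential(1.0, 2000)   # synthetic "response" record
print(annual_max_cdf(3.0, data, threshold=1.0, duration_years=10.0))
```

The returned CDF rises toward 1 as x grows, and the threshold shift keeps the fit focused on the upper tail, mirroring the χ_low mechanism described above.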
  • Model building in air pollution research is often complicated by various kinds of correlations. However, the periods of most concern are those when concentrations are high. The probability model presented here is intended to describe the behavior of high pollutant concentrations. Data are from San Jose air monitoring. (PCS)