OSTI.GOV U.S. Department of Energy
Office of Scientific and Technical Information

Title: Reinforcement learning based automated history matching for improved hydrocarbon production forecast

Journal Article · Applied Energy
 [1];  [2]
  1. Univ. of Oklahoma, Norman, OK (United States)
  2. Texas A & M Univ., College Station, TX (United States)

History matching aims to find a numerical reservoir model that can be used to predict reservoir performance. An engineer and a model-calibration (data-inversion) method are required to adjust various parameters/properties of the numerical model in order to match the reservoir production history. In this study, we develop deep neural networks within a reinforcement learning framework to achieve automated history matching that reduces engineers' effort and human bias, explores the parameter space automatically and intelligently, and removes the need for a large set of labeled training data. To that end, a fast-marching-based reservoir simulator is encapsulated as an environment for the proposed reinforcement learning. The deep-neural-network-based learning agent interacts with the reservoir simulator within the reinforcement learning framework to achieve automated history matching. Reinforcement learning techniques, namely the discrete Deep Q Network and the continuous Deep Deterministic Policy Gradient, are used to train the learning agents. Continuous actions enable the Deep Deterministic Policy Gradient to explore more states at each iteration of a learning episode; consequently, this algorithm achieves a better history match than the Deep Q Network. For simplified dual-target composite reservoir models, the best history-matching performances of the discrete and continuous learning methods, in terms of normalized root mean square error, are 0.0447 and 0.0038, respectively. Furthermore, our study shows that the continuous action space of the Deep Deterministic Policy Gradient drastically outperforms the Deep Q Network.

Research Organization:
Texas A & M Univ., College Station, TX (United States). Texas A & M Engineering Experiment Station
Sponsoring Organization:
USDOE Office of Science (SC), Basic Energy Sciences (BES). Chemical Sciences, Geosciences & Biosciences Division; USDOE Office of Science (SC), Basic Energy Sciences (BES)
Grant/Contract Number:
SC0020675
OSTI ID:
1853666
Alternate ID(s):
OSTI ID: 1809546
Journal Information:
Applied Energy, Vol. 284, Issue C; ISSN 0306-2619
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
