U.S. Department of Energy
Office of Scientific and Technical Information

Reinforcement Learning via Gaussian Processes with Neural Network Dual Kernels

Journal Article · 2020 IEEE Conference on Games (CoG)
  1. Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)

While deep neural networks (DNNs) and Gaussian processes (GPs) are both popular approaches to reinforcement learning, each has undesirable drawbacks on challenging problems. DNNs learn complex non-linear embeddings, but do not naturally quantify uncertainty and are often data-inefficient to train. GPs infer posterior distributions over functions, but popular kernels exhibit limited expressivity on complex and high-dimensional data. Fortunately, recently discovered conjugate and neural tangent kernel functions encode the behavior of overparameterized neural networks in the kernel domain. We demonstrate that these kernels can be efficiently applied to regression and reinforcement learning problems by analyzing a baseline case study. We apply GPs with neural network dual kernels to solve reinforcement learning tasks for the first time. We demonstrate, using the well-understood mountain-car problem, that GPs empowered with dual kernels perform at least as well as those using the conventional radial basis function kernel. Finally, we conjecture that by inheriting the probabilistic rigor of GPs and the powerful embedding properties of DNNs, GPs using NN dual kernels will empower future reinforcement learning models on difficult domains.
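The abstract's core idea, plugging a neural network dual kernel into ordinary GP regression, can be sketched in a few lines. The following is an illustrative example rather than code from the paper: it uses the order-1 arc-cosine kernel (the conjugate/NNGP kernel of an infinitely wide one-hidden-layer ReLU network, per Cho & Saul) inside the textbook GP posterior formulas; the function names and the noise level are our own choices.

```python
import numpy as np

def arccos_kernel(X1, X2):
    """Order-1 arc-cosine kernel: the conjugate (NNGP) kernel of an
    infinitely wide single-hidden-layer ReLU network.
    k(x, x') = (1/pi) * ||x|| ||x'|| * (sin t + (pi - t) cos t),
    where t is the angle between x and x'."""
    n1 = np.linalg.norm(X1, axis=1)
    n2 = np.linalg.norm(X2, axis=1)
    cos_t = np.clip((X1 @ X2.T) / np.outer(n1, n2), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return np.outer(n1, n2) * (np.sin(theta)
                               + (np.pi - theta) * np.cos(theta)) / np.pi

def gp_posterior(X_train, y_train, X_test, noise=1e-4):
    """Standard GP regression posterior mean and variance,
    with the dual kernel swapped in for the usual RBF kernel."""
    K = arccos_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = arccos_kernel(X_test, X_train)
    K_ss = arccos_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)
```

In a model-based RL loop such as the mountain-car study described above, `gp_posterior` would stand in for the learned dynamics or value model: the posterior mean drives action selection while the posterior variance quantifies uncertainty, which is exactly the property DNN function approximators lack.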

Research Organization:
Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA)
Grant/Contract Number:
AC52-07NA27344
OSTI ID:
1780581
Report Number(s):
LLNL-JRNL-808440; 1014384
Journal Information:
2020 IEEE Conference on Games (CoG), Vol. 2020; ISSN 2325-4289
Publisher:
IEEE
Country of Publication:
United States
Language:
English
