SciTech Connect

Title: Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second-order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results apply in particular to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of reachable sets for systems of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
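As a rough orientation for readers of this record, the following is a minimal sketch of the kind of formulation the abstract refers to, written with generic notation (drift b, diffusion sigma, observation function g, terminal cost psi, control set U) that is assumed here rather than quoted from the paper. The running maximum is handled by augmenting the state with an auxiliary variable y, which leads to an HJB equation posed on the region where y >= g(x), with an oblique (Neumann-type) derivative condition on its boundary.

% Controlled diffusion and its running maximum (generic notation, not taken from the paper)
\[
  dX_s = b(s, X_s, u_s)\,ds + \sigma(s, X_s, u_s)\,dW_s, \qquad X_t = x,
  \qquad
  Y_s = y \vee \max_{\theta \in [t,s]} g(X_\theta).
\]
% Value function: terminal cost evaluated on the augmented state (X_T, Y_T)
\[
  v(t,x,y) = \inf_{u}\, \mathbb{E}\big[\, \psi\big(X_T^{t,x,u},\, Y_T^{t,x,y,u}\big) \big].
\]
% HJB equation in the interior, oblique derivative condition on the boundary \{y = g(x)\},
% and terminal condition at time T
\[
  \begin{aligned}
  & -\partial_t v + \sup_{u \in U}\Big\{ -b(t,x,u)\cdot D_x v
      - \tfrac{1}{2}\,\mathrm{Tr}\big(\sigma\sigma^{\top}(t,x,u)\, D_x^2 v\big) \Big\} = 0,
      && t \in [0,T),\ y > g(x),\\
  & -\partial_y v = 0, && t \in [0,T),\ y = g(x),\\
  & v(T,x,y) = \psi(x,y), && y \ge g(x).
  \end{aligned}
\]

The link to reachability mentioned in the abstract is of level-set type: with a suitable choice of psi, a sub-level set of v characterizes the backward reachable set of the controlled diffusion; the exact assumptions and statements are in the article itself.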
Authors:
 [1] ;  [2] ;  [3]
  1. Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7), UFR de Mathématiques, Bât. Sophie Germain (France)
  2. Projet Commands, INRIA Saclay & ENSTA ParisTech (France)
  3. Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)
Publication Date:
OSTI Identifier:
22470060
Resource Type:
Journal Article
Resource Relation:
Journal Name: Applied Mathematics and Optimization; Journal Volume: 71; Journal Issue: 1; Other Information: Copyright (c) 2015 Springer Science+Business Media New York; http://www.springer-ny.com; Country of input: International Atomic Energy Agency (IAEA)
Country of Publication:
United States
Language:
English
Subject:
71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS; 97 MATHEMATICAL METHODS AND COMPUTING; BOUNDARY CONDITIONS; COMPUTERIZED SIMULATION; CONVERGENCE; DIFFERENTIAL EQUATIONS; DYNAMIC PROGRAMMING; LAGRANGIAN FUNCTION; MATHEMATICAL SOLUTIONS; OPTIMAL CONTROL; STOCHASTIC PROCESSES; VISCOSITY