Date: Wednesday, October 14, 2015
Location: 1360 East Hall (4:00 PM to 5:00 PM)
Title: Randomization method for optimal control of partially observed path-dependent SDEs
Abstract: In this talk we introduce a general methodology, which we refer to as the randomization method, first developed for classical Markovian control problems in the paper: I. Kharroubi and H. Pham, "Feynman-Kac representation for Hamilton-Jacobi-Bellman IPDE", Ann. Probab., 2015. As is well known, the dynamic programming method is the standard methodology for the study of classical Markovian control problems: it relates the value function to the Hamilton-Jacobi-Bellman equation through the so-called dynamic programming principle. The key feature of the dynamic programming method is that knowledge of the value function allows one, at least in principle, to find an optimal control for the problem. Alternatively, the Pontryagin maximum principle provides a set of necessary or sufficient conditions for an optimal control in terms of a system of adjoint backward stochastic differential equations. These powerful and well-known methodologies break down (in the sense that they cannot be directly implemented in a standard way) when we face control problems with the following additional features: partial observation, path dependence, or delay in the control. The randomization method, on the other hand, can be quite easily generalized and adapted to these more general control problems. The aim of the talk is to illustrate this latter point, starting with a presentation of the fundamental ideas of the randomization method.
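For orientation, here is a minimal sketch of the classical Markovian setting the abstract refers to; the notation (controlled diffusion, running and terminal costs) is assumed for illustration and is not taken from the talk. For a controlled state process and value function

```latex
% Controlled diffusion with control process (a_t) taking values in a set A:
%   dX_s = b(X_s, a_s)\,ds + \sigma(X_s, a_s)\,dW_s,
% value function
%   v(t,x) = \sup_{a} \mathbb{E}\Big[ \int_t^T f(X_s, a_s)\,ds + g(X_T) \,\Big|\, X_t = x \Big],
% the dynamic programming principle leads (formally) to the HJB equation:
\partial_t v(t,x)
  + \sup_{a \in A} \Big[ b(x,a) \cdot D_x v(t,x)
  + \tfrac{1}{2} \operatorname{tr}\!\big( \sigma\sigma^{\!\top}(x,a)\, D_x^2 v(t,x) \big)
  + f(x,a) \Big] = 0,
\qquad v(T,x) = g(x).
```

A maximizer of the expression in brackets, evaluated along the optimal state, yields (in principle) an optimal feedback control; it is precisely this link that fails to carry over directly under partial observation, path dependence, or control delay.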
The talk is based on joint work in progress with E. Bandini, M. Fuhrman, and H. Pham.
Speaker: Andrea Cosso
Institution: Paris 7 (Diderot), LPMA
Event Organizer: Erhan Bayraktar email@example.com