MATH 626 - Deterministic and Stochastic Optimal Control

This course will be an introduction to optimal control theory. The goal is to develop the tools needed to understand control theory problems. Three examples illustrate the range of problems:
(a) A rocket has engines which can be adjusted in flight. Find the settings of the engines so that the rocket gets to its target in minimum time.
(b) A portfolio consists of a variable combination of cash and stocks. Amounts are withdrawn from the portfolio at a variable rate for consumption. Find the optimal distribution of the portfolio between cash and stocks to maximise the expected utility of consumption.
(c) A company must transport a given product from m origins to n destinations. Given the amount of product at each origin and the amount required at each destination, find the minimum cost of doing the transportation.

Control problems always involve minimising a "cost function" as a function of the adjustable parameters, the "controls", in the problem. In problem (a) the cost function is the time to target. Problem (a) is an example of deterministic control theory. Problem (b) is the Merton portfolio optimisation problem, an example of stochastic control theory. Problem (c) is the Monge-Kantorovich mass transportation problem, an example in linear programming. The mathematics involved in the three problems is closely related.
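To fix notation, problem (c) can be written as a finite-dimensional linear program. Here a_i, b_j, and c_{ij} (the supply at origin i, the demand at destination j, and the unit cost of shipping from i to j) are taken as given; the symbols are illustrative, not the course's own notation.

```latex
% Transportation problem as a linear program:
% x_{ij} = amount shipped from origin i to destination j.
\min_{x \ge 0} \; \sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij}\, x_{ij}
\quad \text{subject to} \quad
\sum_{j=1}^{n} x_{ij} = a_i \ (1 \le i \le m), \qquad
\sum_{i=1}^{m} x_{ij} = b_j \ (1 \le j \le n).
```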

The first part of the course explores the relationship between deterministic control problems and Hamiltonian mechanics. Central to this is the result that the cost function satisfies a first order partial differential equation, the Hamilton-Jacobi equation, known as the Bellman equation in control theory.
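As a sketch of this central result (the notation is illustrative; the course's conventions may differ): for dynamics \dot{x} = f(x,u) with running cost L(x,u) on a horizon [t, T], the value function V satisfies

```latex
% Value function: V(x,t) = \min_u \int_t^T L(x(s),u(s))\,ds  along  \dot{x} = f(x,u).
-\,\partial_t V(x,t) = \min_{u}\Big[\, L(x,u) + \nabla_x V(x,t)\cdot f(x,u) \,\Big],
\qquad V(x,T) = 0.
% Writing H(x,p) = \min_u [\, L(x,u) + p\cdot f(x,u) \,] puts this in the
% Hamilton-Jacobi form  -\partial_t V = H(x, \nabla_x V).
```

Minimising over the control u at each point is what makes this first order equation nonlinear, and is the source of the connection to Hamiltonian mechanics.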

The second part of the course considers problems of stochastic control theory. The cost function is now the expectation of a functional of the dynamical variables. It satisfies a second order parabolic or elliptic partial differential equation, which can be fully nonlinear. We study two examples: Burgers' equation and the Monge-Ampère equation. We also explore the connection between control theory and prediction theory. In prediction theory one tries to predict the value of a random variable from observations of other random variables correlated with the variable of interest. We show that an exactly solvable problem in prediction theory, the Kalman filter, is equivalent to a control problem with linear dynamics and quadratic cost function.
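As a concrete illustration of the prediction side, here is a minimal scalar Kalman filter, a hypothetical sketch rather than the course's formulation: the state follows a random walk x_{k+1} = x_k + w_k and we observe y_k = x_k + v_k, with w_k and v_k independent Gaussian noises.

```python
import random

def kalman_1d(ys, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the model
       x_{k+1} = x_k + w_k,  Var(w_k) = q   (random-walk state),
       y_k     = x_k + v_k,  Var(v_k) = r   (noisy observation).
    Returns the sequence of filtered estimates of x_k."""
    x, p = x0, p0              # current estimate and its variance
    estimates = []
    for y in ys:
        p = p + q              # predict: variance grows by the process noise
        k = p / (p + r)        # Kalman gain: weight on the new observation
        x = x + k * (y - x)    # update: move estimate toward the observation
        p = (1.0 - k) * p      # updated (reduced) estimate variance
        estimates.append(x)
    return estimates

# Track a constant true state x = 5 through noisy observations.
random.seed(0)
obs = [5.0 + random.gauss(0.0, 0.5) for _ in range(200)]
est = kalman_1d(obs, q=1e-4, r=0.25)
```

With q small the filter behaves like a long-run average of the observations, so the estimates settle near the true value 5; the gain k is exactly the variance-minimising weight, which is the quadratic-cost structure behind the equivalence mentioned above.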

In the final part of the course we will be concerned with problem (c), the Monge-Kantorovich mass transfer problem. The original transport problem was proposed by Monge in the 1780s. The solution by Kantorovich and Koopmans was awarded the 1975 Nobel Prize in Economics. We shall show how the problem is solved by passing to the dual linear program. The continuous version of the dual problem gives a solution of the Monge-Ampère equation, which appears earlier in the course.
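Schematically, with a_i the supply at origin i, b_j the demand at destination j, and c_{ij} the unit transport cost (symbols chosen here for illustration), the dual linear program assigns a "potential" to each origin and destination:

```latex
% Dual of the Monge-Kantorovich transportation LP:
% potentials u_i at the origins and v_j at the destinations.
\max_{u,\,v} \; \sum_{i=1}^{m} a_i u_i + \sum_{j=1}^{n} b_j v_j
\quad \text{subject to} \quad
u_i + v_j \le c_{ij} \ \ \text{for all } i,\, j.
```

In the continuous version these potentials become functions on the source and target spaces, and it is through them that the connection to the Monge-Ampère equation arises.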

• Prerequisite: Some knowledge of differential equations and probability theory (at the 500 level).
• Grading: Grades will be based on performance in the homework sets.
• Text: "Deterministic and Stochastic Optimal Control" by W. Fleming and R. Rishel, Springer 1975 (reprinted 1999).