Solving the Hamilton-Jacobi-Bellman equations of optimal control and state-estimation: towards taming the curse of dimensionality
Karl Kunisch, Graz University and Radon Institute, Austria
Optimal feedback control and state-estimation of nonlinear systems
depend on the solution of high-dimensional Hamilton-Jacobi-Bellman equations. Solving such problems in practice represents
a significant challenge.
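For orientation, the equation in question can be written in a generic textbook form for infinite-horizon problems (an illustrative formulation, not necessarily the exact setting of the talk):
\[
\min_{u \in U}\bigl\{\nabla V(x)\cdot f(x,u) + \ell(x,u)\bigr\} = 0, \qquad
u^{*}(x) \in \operatorname*{arg\,min}_{u \in U}\bigl\{\nabla V(x)\cdot f(x,u) + \ell(x,u)\bigr\},
\]
where $V$ denotes the value function, $f$ the system dynamics, and $\ell$ the running cost. Since $V$ is posed on the full state space, its dimension equals that of the dynamical system, which is the source of the curse of dimensionality.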
In the first part we survey three approaches: policy iteration, a data-driven technique, and the averaged feedback learning scheme (AFLS), commenting on their practical performance and on the analytical results that can be achieved.
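As a point of reference for the first of these approaches, a generic sketch of policy iteration in the notation of the display above (the concrete discretizations, as well as the data-driven and AFLS variants discussed in the talk, are not reproduced here): given an admissible feedback $u_k$, solve the linear generalized HJB equation
\[
\nabla V_k(x)\cdot f\bigl(x,u_k(x)\bigr) + \ell\bigl(x,u_k(x)\bigr) = 0,
\]
then update the policy by
\[
u_{k+1}(x) \in \operatorname*{arg\,min}_{u\in U}\bigl\{\nabla V_k(x)\cdot f(x,u) + \ell(x,u)\bigr\},
\]
and iterate until the value functions, respectively the policies, converge.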
While these results are mainly justified for sufficiently regular value functions, in the second part we concentrate on value functions which are only semiconcave. For this case we again provide an algorithmic framework, based on the representation of semiconcave functions as minima of $C^2$ regular functions.
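For context, a representation of this type can be sketched in its standard form for semiconcavity with a linear modulus (the precise assumptions of the talk may differ): if $x \mapsto V(x) - \tfrac{C}{2}|x|^2$ is concave, then
\[
V(x) = \inf_{i\in I}\Bigl\{a_i + \langle p_i, x\rangle + \tfrac{C}{2}\,|x|^2\Bigr\},
\]
i.e. $V$ is a pointwise minimum of $C^2$ (in fact quadratic) functions.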
This is joint work with B. Azmi, S. Dolgov, D. Kalise, D. Vasquez, and D. Walter.