IFIP TC 7 / 2013 - Minisymposia

Organizers:
  • Varvara Turova, TU München
  • Nikolai Botkin, TU München

Dynamic Programming Approach to Optimal Control: Methods and Applications

The dynamic programming method, proposed by R. Bellman at the end of the 1950s, has become a powerful tool of modern control theory. It is applicable to problems with fixed and non-fixed termination time, to processes with an infinite time horizon, and to problems with various boundary conditions and state constraints.

The dynamic programming method is also suitable for solving optimal control problems in the presence of uncertainties, provided the uncertain factors are interpreted as counteractions of an opponent who tries to do maximal harm, whereas the aim of the control is to ensure the best guaranteed result. In problems with continuous time, the dynamic programming approach leads to Hamilton-Jacobi-Bellman (HJB) equations, whose theory is being intensively developed. At the same time, the development of effective numerical methods is of great importance for applications.
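
For orientation, a sketch of the type of equation in question, with notation (dynamics f, running cost g, terminal cost \sigma, control set U) assumed here rather than fixed by the abstract: the value function V(t,x) of a finite-horizon problem formally satisfies

  \partial_t V(t,x) + \min_{u \in U} \bigl[ \langle \nabla_x V(t,x),\, f(t,x,u) \rangle + g(t,x,u) \bigr] = 0, \qquad V(T,x) = \sigma(x).

In the conflict-control setting described above, the minimum is replaced by a min-max over the control and the disturbance, which yields a Hamilton-Jacobi-Isaacs equation of the same structure.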

The section focuses on recent achievements both in the theory of viscosity solutions (existence, proximal analysis, regularity, etc.) of HJB equations and in approximation techniques that provide stable numerical algorithms for the treatment of HJB equations arising from nonlinear control problems with state constraints.
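
As a rough illustration of such an approximation technique (a minimal sketch, not any speaker's actual method), the following Python fragment performs semi-Lagrangian value iteration for a one-dimensional discounted HJB equation; the dynamics f, cost g, discount rate rho, and grids are placeholder choices.

  import numpy as np

  # Semi-Lagrangian value iteration for the 1-D discounted HJB equation
  #   rho*V(x) = min_u [ g(x,u) + V'(x)*f(x,u) ].
  # All model choices (f, g, rho, grids) are illustrative placeholders.
  rho, dt = 0.5, 0.05                        # discount rate, pseudo-time step
  xs = np.linspace(-2.0, 2.0, 201)           # state grid
  us = np.linspace(-1.0, 1.0, 21)            # discretized control set

  def f(x, u):                               # placeholder dynamics: x' = u
      return u

  def g(x, u):                               # placeholder running cost
      return x**2 + 0.1 * u**2

  V = np.zeros_like(xs)
  for _ in range(5000):
      # One Euler step of the dynamics for every (control, state) pair.
      x_next = xs[None, :] + dt * f(xs[None, :], us[:, None])
      # np.interp clamps states that leave the grid, a crude stand-in
      # for the state constraints mentioned above.
      V_next = np.interp(x_next.ravel(), xs, V).reshape(x_next.shape)
      Q = dt * g(xs[None, :], us[:, None]) + np.exp(-rho * dt) * V_next
      V_new = Q.min(axis=0)                  # minimize over the control grid
      if np.max(np.abs(V_new - V)) < 1e-9:   # fixed point reached
          V = V_new
          break
      V = V_new

Because the factor exp(-rho*dt) is smaller than one, each sweep is a contraction, so the iteration converges to the unique fixed point, the discrete approximation of the value function.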

Different approaches to overcoming the curse of dimensionality, including parallel algorithms and the use of modern computing technologies, will be discussed.
Important applications of the dynamic programming method in different areas, e.g., civil aviation, acoustics, and ecology, will be presented.
