Iterative Approximation of Optimal Control for a Class of Nonlinear Systems


Proceedings of the 15th Latin American Control Conference, 2012
Author(s): Stanger T., Passenbrunner T., Del Re L.
Year: 2012
Abstract:
For nonlinear systems, the optimal control law is given by the solution of the Hamilton-Jacobi-Bellman equation, which cannot be solved in a general way. The method proposed in this paper obtains a solution by successive approximation, based on solving the Generalized Hamilton-Jacobi-Bellman equation. Successive improvement of the control law leads to an approximation of the optimal control, which is optimal in a bounded region around the origin. Application of Policy Iteration to an example, an unstable, nonlinear, inverted pendulum, demonstrates the capabilities of the whole approach. Two different implementations of Policy Iteration have been applied to this example: one uses simulation to approximate the solution of the Generalized Hamilton-Jacobi-Bellman equation, the other is based on a numerical solution. While the first realization is computationally expensive but requires only little theoretical knowledge, the second one is much faster. The improvement of the control law, measured by the cost function, gained by this approach is up to 30% with respect to the LQR.
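For orientation, a minimal sketch of the Policy Iteration recursion the abstract refers to, written in its standard form for an input-affine system with a quadratic input cost (the paper's specific system class, cost, and notation may differ): given the dynamics and cost

\[ \dot{x} = f(x) + g(x)\,u, \qquad J(u) = \int_0^\infty \bigl( Q(x) + u^\top R\,u \bigr)\,dt, \]

the policy evaluation step solves the Generalized Hamilton-Jacobi-Bellman equation for the value function \(V_i\) of the current control law \(u_i\),

\[ \nabla V_i(x)^\top \bigl( f(x) + g(x)\,u_i(x) \bigr) + Q(x) + u_i(x)^\top R\,u_i(x) = 0, \]

and the policy improvement step updates the control law via

\[ u_{i+1}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^\top \nabla V_i(x). \]

Unlike the full HJB equation, each GHJB equation is linear in the unknown \(V_i\), which is what makes the successive-approximation scheme tractable; initialized with a stabilizing control law (e.g., the LQR of the linearization), the iteration converges under the usual assumptions to the HJB solution on a bounded region around the origin.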