Control and Dynamical Systems, Caltech

Nonlinear Optimal Control: A Receding Horizon Approach

Jim Primbs - Thesis Defense, Control and Dynamical Systems, Caltech

Thursday, January 7, 1999
3:00 PM to 4:00 PM
Steele 125

As advances in computing power forge ahead at an unparalleled rate, an increasingly compelling question that spans nearly every discipline is how best to exploit these advances. At one extreme, a tempting approach is to throw as much computational power at a problem as possible. Unfortunately, this is rarely a justifiable approach unless one has some theoretical guarantee of the efficacy of the computations. At the other extreme, not taking advantage of available computing power is unnecessarily limiting. In general, it is only through a careful inspection of the strengths and weaknesses of all available approaches that an optimal balance between analysis and computation is achieved. This thesis addresses the delicate interaction between theory and computation in the context of optimal control.

An exact solution to the nonlinear optimal control problem is known to be prohibitively difficult to obtain, both analytically and computationally. Nevertheless, a number of alternative (suboptimal) approaches have been developed. Many of these techniques approach the problem from an off-line, analytical point of view, designing a controller based on a detailed analysis of the system dynamics. A concept particularly amenable to this point of view is that of a control Lyapunov function, which extends the Lyapunov methodology to control systems. In contrast, so-called receding horizon techniques rely purely on on-line computation to determine a control law: at each step, a finite-horizon optimal control problem is solved from the current state, the beginning of the resulting control is applied, and the process repeats. While offering an alternative method of attacking the optimal control problem, receding horizon implementations often lack solid theoretical stability guarantees.

In this thesis, we uncover a synergistic relationship between control Lyapunov function based schemes and on-line receding horizon style computation. These connections derive from the classical Hamilton-Jacobi-Bellman and Euler-Lagrange approaches to optimal control. By returning to these roots, a broad class of control Lyapunov schemes is shown to admit natural extensions to receding horizon schemes, benefiting from the performance advantages of on-line computation. From the receding horizon point of view, a control Lyapunov function not only supplies the theoretical stability properties that receding horizon control typically lacks, but also unexpectedly eases many of the difficult implementation requirements associated with on-line computation. After these schemes are developed for the unconstrained nonlinear optimal control problem, the entire design methodology is illustrated on the planar model of a ducted fan. The schemes are then extended to time-varying and input-constrained nonlinear systems, offering a promising new paradigm for nonlinear optimal control design.
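The receding horizon loop described above can be sketched in a few lines. This is a minimal illustration, not the thesis's method: the scalar plant model, cost weights, and crude coordinate-descent optimizer below are all hypothetical stand-ins, and the terminal penalty is only a rough surrogate for the control Lyapunov function terminal cost the thesis develops.

```python
# Minimal receding horizon (model predictive) control sketch.
# Assumed for illustration: a scalar discrete-time plant
# x_{k+1} = x_k + dt * (x_k**3 + u_k), a quadratic stage cost, and a
# simple coordinate-descent optimizer over the control sequence.

def simulate_step(x, u, dt=0.1):
    """One step of the (hypothetical) nonlinear plant model."""
    return x + dt * (x**3 + u)

def horizon_cost(x0, us, q=1.0, r=0.1, p=10.0):
    """Finite-horizon cost: stage costs plus a terminal penalty.
    The terminal weight p stands in, very loosely, for a control
    Lyapunov function penalizing the state at the horizon's end."""
    x, cost = x0, 0.0
    for u in us:
        cost += q * x**2 + r * u**2
        x = simulate_step(x, u)
    return cost + p * x**2

def optimize_sequence(x0, horizon=10, iters=50, step=0.5):
    """Crude coordinate descent over the control sequence
    (a real implementation would use a proper NLP solver)."""
    us = [0.0] * horizon
    for _ in range(iters):
        for i in range(horizon):
            base = horizon_cost(x0, us)
            for du in (step, -step):
                trial = us.copy()
                trial[i] += du
                if horizon_cost(x0, trial) < base:
                    us = trial
                    break
        step *= 0.9
    return us

def receding_horizon(x0, steps=30):
    """Apply only the first control of each optimized sequence,
    then re-solve from the new state: the receding horizon loop."""
    x, traj = x0, [x0]
    for _ in range(steps):
        us = optimize_sequence(x)
        x = simulate_step(x, us[0])
        traj.append(x)
    return traj

traj = receding_horizon(0.5)
print(f"initial state: {traj[0]:.3f}, final state: {traj[-1]:.3f}")
```

Because only the first piece of each optimized control sequence is ever applied, the scheme's closed-loop stability depends on the terminal cost and horizon length — exactly the gap that the thesis's control Lyapunov function machinery is designed to close.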

©2003-2011 California Institute of Technology. All Rights Reserved
webmaster@cds.caltech.edu