CDS 110b: Linear Quadratic Regulators

This lecture provides a brief derivation of the linear quadratic regulator (LQR) and describes how to design an LQR-based compensator. The use of integral feedback to eliminate steady state error is also described.

References and Further Reading

  • R. M. Murray, Optimization-Based Control. Preprint, 2008: Chapter 2 - Optimal Control
  • Lewis and Syrmos, Section 3.4 - this follows the derivation in the notes above. I am not putting in a scan of this chapter since the course text is available, but you are free to have a look via Google Books.
  • Friedland, Chapter 9 - the LQR controller is derived differently here, giving an alternative approach.

Frequently Asked Questions

Q: What do you mean by "penalizing", as in the statement that Q_{x}\geq 0 "penalizes" the state error?

The quadratic cost function has the form J = \int_{0}^{T}\left(x^{T}Q_{x}x + u^{T}Q_{u}u\right)dt + x(T)^{T}P_{1}x(T), so it contains three quadratic terms: x^{T}Q_{x}x, u^{T}Q_{u}u, and x(T)^{T}P_{1}x(T). If Q_{x}\geq 0 is relatively large, the state x makes a large contribution to the value of J; to keep J small, x must therefore be kept small. Choosing a large Q_{x} thus keeps x in a small region, which is what "penalizing" the state error means.

So in optimal control design, the relative sizes of Q_{x}, Q_{u}, and P_{1} express how important the state x, the input u, and the terminal state x(T) are in the designer's trade-off.

Zhipu Jin, 13 Jan 03
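
The effect described above can also be seen numerically. Below is a minimal sketch (in Python, using SciPy) that assumes a simple double-integrator plant and arbitrary weight values, neither of which comes from the lecture; it solves the infinite-horizon algebraic Riccati equation (so the terminal weight P_{1} does not appear) and compares the resulting state-feedback gains and closed-loop poles for a small and a large Q_{x}.

    # Sketch of the Q_x / Q_u trade-off: the plant and weights are
    # illustrative assumptions, not values from the lecture.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Double integrator: xdot = A x + B u, state x = (position, velocity)
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])

    Qu = np.array([[1.0]])      # input weight, held fixed

    for qx in (1.0, 100.0):     # small vs. large state weight
        Qx = qx * np.eye(2)
        # Solve the algebraic Riccati equation and form the LQR gain
        # K = Qu^{-1} B^T P (infinite horizon, so P_1 drops out).
        P = solve_continuous_are(A, B, Qx, Qu)
        K = np.linalg.solve(Qu, B.T @ P)
        poles = np.linalg.eigvals(A - B @ K)
        print(f"Qx = {qx:6.1f}*I -> K = {K.ravel()}, closed-loop poles = {poles}")

Increasing Q_{x} relative to Q_{u} produces a larger gain K and faster closed-loop poles: the state error is driven to zero more aggressively, at the cost of larger control effort.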