CDS 110b: Optimal Control


This lecture provides an overview of optimal control theory. Beginning with a review of optimization, we introduce the notion of Lagrange multipliers and give a summary of Pontryagin's maximum principle.
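For reference, a minimal sketch of the finite-horizon problem and the necessary conditions from the maximum principle is given below; the specific symbols (integral cost L, terminal cost V, horizon T) are assumptions and may differ slightly from the notation in the course notes.

% Sketch of the finite-horizon optimal control problem and the
% necessary conditions from Pontryagin's maximum principle.
% Notation (L, V, T, \lambda) is assumed, not taken from the notes.
\[
  \min_{u(\cdot)} \; J = \int_0^T L(x,u)\,dt + V\bigl(x(T)\bigr)
  \quad \text{subject to} \quad \dot x = f(x,u), \quad x(0) = x_0 .
\]
Define the Hamiltonian
\[
  H(x,u,\lambda) = L(x,u) + \lambda^T f(x,u).
\]
Along an optimal trajectory the state and costate satisfy
\[
  \dot x = f(x,u), \qquad
  \dot\lambda = -\left(\frac{\partial H}{\partial x}\right)^{\!T}, \qquad
  \lambda(T) = \left(\frac{\partial V}{\partial x}\right)^{\!T}\bigg|_{x(T)},
\]
and the optimal input minimizes the Hamiltonian pointwise in time:
\[
  u^*(t) = \arg\min_{u} H\bigl(x^*(t), u, \lambda(t)\bigr).
\]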

References and Further Reading

Frequently Asked Questions

Q: In problem 2.4(d) of the homework, to what positive value should the parameter b be set?

Use b = 1 in part (d) when solving for and comparing the two trajectories found symbolically in the previous parts.

Julia Braman, 18 Jan 08

Q: In the example on Bang-Bang control discussed in the lecture, how is the control law for u obtained?

Pontryagin's maximum principle says that u has to be chosen to minimize the Hamiltonian H(x, u, \lambda) for the given values of x and \lambda. In the example, H = 1 + (\lambda^T A) x + (\lambda^T B) u, with the input constrained to |u| \le 1. At first glance it might seem that making u as negative as possible always minimizes H, and since the most negative value allowed is -1, that u = -1. However, the coefficient \lambda^T B multiplying u may be of either sign, so u must be chosen so that the term (\lambda^T B) u is negative: u = -1 when \lambda^T B > 0 and u = +1 when \lambda^T B < 0. That is how we arrive at u = -\mathrm{sign}(\lambda^T B).

Shaunak Sen, 12 Jan 06
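The minimization step in the answer above can also be written out explicitly; the following is a sketch of that argument, assuming the input constraint |u| \le 1 used in the lecture example.

% Pointwise minimization of the Hamiltonian over the admissible inputs,
% assuming |u| <= 1 as in the lecture's bang-bang example.
\[
  u^*(t) = \arg\min_{|u| \le 1}
           \Bigl[\, 1 + \lambda^T A x + (\lambda^T B)\, u \,\Bigr]
         = \arg\min_{|u| \le 1} (\lambda^T B)\, u
         = \begin{cases}
             -1, & \lambda^T B > 0, \\
             +1, & \lambda^T B < 0,
           \end{cases}
\]
that is, u^*(t) = -\mathrm{sign}\bigl(\lambda(t)^T B\bigr); when \lambda^T B = 0 the condition does not determine u, which corresponds to a switching instant.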

Q: A notation question for you: in the lecture notes from Wednesday, I'm assuming that T is the final time and that a superscript T denotes the transpose operation. Am I correct in my assumption?

Yes, you are correct.

Jeremy Gillula, 07 Jan 05