EECI 2020: Probabilistic Systems
Revision as of 07:57, 10 March 2020
This lecture provides an introduction to probabilistic model checking. We start with Markov chains as a mathematical model that describes the behavior of probabilistic systems, where the successor of each state is chosen according to a probability distribution. Then, we discuss the basic concepts of probability theory necessary to reason about the quantitative properties of Markov chains. We then move to quantitative analysis of systems modeled by Markov chains, including reachability, regular safety, and omega-regular properties. Finally, we introduce Markov decision processes (MDPs), a mathematical model that permits both probabilistic and nondeterministic choices, and discuss policy synthesis for MDPs with LTL specifications.
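To make the reachability analysis mentioned above concrete, the following is a minimal sketch of how reachability probabilities in a finite Markov chain can be computed by solving a linear system, following the standard approach in Baier and Katoen, Chapter 10. The transition matrix, state names, and target set here are invented for illustration; they do not come from the lecture.

```python
import numpy as np

# Hypothetical 4-state Markov chain; P[i][j] = probability of moving from i to j.
# States: 0 = start, 1 = retry, 2 = success (target), 3 = failure (absorbing).
P = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.0, 0.3, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

target = [2]        # states to be reached (probability 1 by definition)
unknown = [0, 1]    # states whose reachability probability we solve for;
                    # state 3 cannot reach the target, so its probability is 0

# The reachability probabilities x satisfy x = A x + b, i.e. (I - A) x = b,
# where A restricts P to the unknown states and b collects the one-step
# probabilities of entering the target.
A = P[np.ix_(unknown, unknown)]
b = P[np.ix_(unknown, target)].sum(axis=1)
x = np.linalg.solve(np.eye(len(unknown)) - A, b)

for s, p in zip(unknown, x):
    print(f"Pr(reach target from state {s}) = {p:.4f}")
```

For this toy chain, both non-absorbing states reach the target with probability 0.6. Tools such as Storm (via Stormpy, used in the next computer session) perform this same computation, with more sophisticated preprocessing, for much larger models.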
== Lecture Materials ==
* [http://www.cds.caltech.edu/~murray/courses/eecisp2020//L5_probabilistic10Mar2020.pdf Lecture slides] (Presentation and notation follow that in "Principles of Model Checking" chapter 10 by Baier and Katoen.)
== Further Reading ==

* Principles of Model Checking, C. Baier and J.-P. Katoen, The MIT Press, 2008. A detailed reference on model checking. Slides for this lecture follow Chapter 10 of this reference.

* Incremental Synthesis of Control Policies for Heterogeneous Multi-Agent Systems with Linear Temporal Logic Specifications, T. Wongpiromsarn, A. Ulusoy, C. Belta, E. Frazzoli and D. Rus, ICRA 2013. An incremental version of probabilistic synthesis, with autonomous driving examples.

* Control of Probabilistic Systems under Dynamic, Partially Known Environments with Temporal Logic Specifications, T. Wongpiromsarn and E. Frazzoli, CDC 2012. Another example of probabilistic synthesis, with agents that are not fully observable.