# Workshop: Specification and Verification of Embedded Systems


*Latest revision as of 15:33, 24 September 2009*

## Aim

These notes aim to refresh our memory of the discussion we had on verification on 28 October 2008 at Caltech, and to initialize a discussion of Richard's question/suggestion of how model predictive control (MPC) type ideas may play a role in constructing FSA from specifications and then translating them to hybrid controllers.

## Summary and Discussion

Automatically generating a finite state machine from LTL properties provides a way to construct a supervisory controller that is correct by construction. To describe how this works, we distinguish between system specifications (all the possible behaviors of the system) and requirements (the valid/desirable/correct behaviors of the system). [hadas: We distinguish between two types of "entities": the *environment*, representing the sensor information, and the *system*, representing the aspects of the system we can control. For example, if the task is "search the five rooms and if you see Mika stop", the sensor information "Mika has been seen" belongs to the environment, while the region the robot is in belongs to the system. Each of these entities can have restrictions on its behavior (where by behavior I mean the allowed changes to the truth values of the propositions). When these restrictions relate to the environment, we view them as assumptions on the behavior of the environment; for example, we may assume "Mika is never in Room 1". Restrictions on the behavior of the system are viewed as desired behavior (both safety and liveness requirements)]

- Using the system (and environment) specifications, construct a finite state model which represents all the possible state transitions of the system. For example, in robot motion planning, we discretize the space into regions. These regions are represented by the nodes of the finite state model. An edge from node A to node B means that from region A, the robot can go to region B; i.e., since a robot cannot jump, there is an edge from A to B if regions A and B are adjacent. [nok: Question: If, from the system requirements, we know, for example, that we cannot go to region C (say, it is occupied by an obstacle), can we just remove all the edges to C (or even remove the node C) at this stage, so we can speed up the search we need to do in the next step? It may not matter if we extract an automaton offline, but if we want to do it online, incorporating requirements at this stage may save a bit of computation time.]
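As a minimal sketch of this discretization step (the 2x2 workspace and region names are hypothetical, purely for illustration), the regions become nodes and adjacency becomes the transition relation of the finite state model:

```python
# Sketch of the discretization described above: regions become nodes, and an
# edge A -> B exists iff the robot can move from region A to region B in one
# step. Since the robot cannot jump, edges connect only adjacent regions.

# A 2x2 grid of regions; two regions are adjacent iff they share a wall.
regions = {"A": (0, 0), "B": (0, 1), "C": (1, 0), "D": (1, 1)}

def adjacent(p, q):
    """Regions are adjacent iff their grid cells differ by exactly one step."""
    (r1, c1), (r2, c2) = regions[p], regions[q]
    return abs(r1 - r2) + abs(c1 - c2) == 1

# Transition relation of the finite state model (self-loops allowed:
# the robot may also stay where it is).
transitions = {p: sorted(q for q in regions if q == p or adjacent(p, q))
               for p in regions}

print(transitions["A"])  # → ['A', 'B', 'C']  (D is diagonal, so no edge)
```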

[hadas: There isn't really a need to construct a finite state model of the state transitions. In the case of a partitioned environment it is easy to do, and that is what I actually do, but what is really needed are constraints on the change in truth value of the propositions. The finite state model of the workspace imposes the constraint that if proposition "room 1" is true now, proposition "room 17" can only be true in the next state if the rooms are adjacent. In other cases, for example an intersection automaton, you can specify a set of constraints that have to be met, and as long as they are consistent, the FSA will emerge (no need to have a consistent model in mind). In general, the way to view it is that the robot and the environment can arbitrarily change the truth values of their propositions (can do anything) unless you restrict them.

Nok, regarding your question: you could either remove the subformulas relating to C, OR you could add a formula that says "never C". Since the LTL game is solved using a fixed-point calculation, all states containing C will be removed in the first iteration and the computation will not be slowed. The more restrictions, the faster the calculation; the more propositions introduced, the larger the state space and the computation time.]
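A minimal sketch of the fixed-point behavior hadas describes, with a hypothetical transition relation: states satisfying the forbidden proposition drop out in the first iteration, and states that can only lead into them drop out in later iterations.

```python
# Fixed-point pruning for a safety requirement "never C": start from all
# states outside the bad set and repeatedly discard states that have no
# surviving successor. (Transition relation is a made-up example.)

transitions = {
    "A": ["A", "B"],
    "B": ["A", "C"],
    "C": ["C"],        # C is the forbidden region
    "D": ["C"],        # D's only exit leads into C
}
bad = {"C"}

# Greatest fixed point: keep states outside `bad` that retain at least
# one successor inside the surviving set.
safe = set(transitions) - bad
while True:
    smaller = {s for s in safe if any(t in safe for t in transitions[s])}
    if smaller == safe:
        break
    safe = smaller

print(sorted(safe))  # → ['A', 'B']  (C removed immediately, D one step later)
```

Note that B survives even though it has an edge into C: the controller can choose the edge to A instead, which is exactly why the calculation removes only states from which safety cannot be maintained.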

[ufuk: *I am confused about restrictions and propositions. Could you please give definitions of these and also simple examples?*]

[hadas: A simple example would illustrate it, I think:

Scenario: the workspace contains 4 rooms [1|2|3|4] in a line. The robot can sense people and can sound an alarm. The task is to search all the rooms repeatedly and sound the alarm whenever the robot sees a person. (This is a very simple example where it is not clear you even need to use LTL, but it is easy to follow, I hope.)

Propositions: Boolean propositions divided into two sets, environment (or input) and robot (or output).
- Environment proposition: 'See_person' (true if the robot is sensing a person and false otherwise).
- Robot propositions: 'Room1', 'Room2', 'Room3', 'Room4', 'Alarm'. Proposition Roomi is true iff the robot is in room i; Alarm is true iff the robot is sounding its alarm.

The propositions capture the information in the system. The environment propositions capture the high-level information the robot can sense, while the robot propositions represent the actions the robot can do. If I get a plan in which *Room1* is true in the current state and *Room2* is true in the next state, then the robot needs to move from room 1 to room 2. If *Alarm* is true in the current state, the robot must sound the alarm.

The specifications are given as LTL formulas (with a specific structure I will not get into right now) - they include the liveness portions of the task (things that eventually have to happen, for example all rooms must be reached) as well as the safety (things that must always hold, for example the alarm must always be sounded when a person is seen, and from room 1 you can only go to room 2, etc.). The way I see it is that the safety requirements "restrict" the possible behaviors of the robot. To expand on that, let's look at the topology of the workspace. If the LTL formula is only <>Room1 ^ <>Room4 (eventually room 1 and eventually room 4), then I might get a plan that moves the robot from room 1 directly to room 4. But that is not topologically feasible, so I need to add the restrictions on the motion of the robot: [](Room1 -> (ORoom1 | ORoom2))... (Always, when you are in room 1, you can only be in room 1 or room 2 in the next step). That is what I mean by restrictions. It is a bit different than traditional AI in that the robot can do anything (change the values of the robot propositions) unless restricted.]
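The topology restriction for the 4-room example can be made concrete with a small sketch (the plan and helper names are illustrative, not from any synthesis tool): each safety formula [](Roomi -> (ORoom(i-1) | ORoomi | ORoom(i+1))) becomes a set of allowed next rooms.

```python
# The rooms [1|2|3|4] lie in a line, so the restriction [](Roomi -> ...)
# says: in the next step the robot may stay put or move to an adjacent room.
rooms = [1, 2, 3, 4]
allowed_next = {i: [j for j in rooms if abs(i - j) <= 1] for i in rooms}

def satisfies_restrictions(plan):
    """Check that every consecutive step of the plan obeys the restriction."""
    return all(b in allowed_next[a] for a, b in zip(plan, plan[1:]))

# A patrol realizing the liveness part <>Room1 ^ ... ^ <>Room4 (every room
# eventually reached) while respecting the safety restriction above.
patrol = [1, 2, 3, 4, 3, 2]

print(satisfies_restrictions(patrol))  # → True  (adjacent moves only)
print(satisfies_restrictions([1, 4]))  # → False (room 1 -> room 4 is a jump)
```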

- From the system requirements and the previously constructed finite state model [hadas: The LTL formula capturing the assumptions and requirements], use a graph search algorithm [hadas: It is not a traditional graph search since there is no explicit graph but rather BDDs and fixed point calculations] similar to that used in model checkers to extract an automaton from the finite state model [hadas: BDD that represents the full winning strategy] which satisfies the requirement.
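As a much-simplified illustration of how a strategy falls out of such a fixed-point computation (the actual synthesis solves a two-player game symbolically over BDDs; this one-player reachability sketch with a hypothetical transition relation only shows the flavor):

```python
# Backward fixed point for a one-player reachability objective: iteratively
# add states that can reach the goal, recording how many steps each needs.
# The iteration level then directly yields a next-state strategy, loosely
# analogous to reading the winning strategy off the BDD.
from collections import deque

transitions = {"A": ["B", "C"], "B": ["D"], "C": ["A"], "D": ["D"]}
goal = "D"

# level[s] = number of steps state s needs to reach the goal.
level = {goal: 0}
frontier = deque([goal])
while frontier:
    t = frontier.popleft()
    for s, succs in transitions.items():
        if s not in level and t in succs:
            level[s] = level[t] + 1
            frontier.append(s)

# Strategy: from each winning state, move to a successor one level closer.
strategy = {s: min((t for t in transitions[s] if t in level), key=level.get)
            for s in level}

print(strategy)  # from A go to B, from B go to D, from C go to A
```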

A serious drawback of this technique is that it suffers from state explosion (the finite state model of the system can get quite large) and thus, in practice, is not applicable to a complex system. In addition, extracting an automaton which satisfies the LTL requirements may take a long time since all the possible behaviors from the beginning to the end of the execution need to be considered in detail. [hadas: Extracting the automaton takes a long time but is not really necessary - next up for me is to see how to extract the next state from the BDD directly, without extracting the full automaton. Having said all of that, yes, there is a state explosion problem]

Receding horizon control is an effective approach to dealing with large constrained control problems. The main idea is to choose the control action by repeatedly solving online an optimal control problem over a finite horizon, with a certain cost associated with the end state. That is, we do not need to solve for the whole sequence of control actions from the beginning to the end of the execution. Instead, we only solve the optimization problem over a short period of time and use the cost associated with the end state of the horizon to guarantee the stability of the system. We may be able to apply a similar idea to the construction of a finite state machine from LTL requirements. For example, we do not really care about what exactly may happen more than 500 m ahead of the vehicle; we only need to know that from each of those points, we can get to the desired end state (or the sequence of desired states, in case we need to visit several checkpoints) with some approximated cost. This allows us to reduce the number of states that need to be considered while extracting an automaton which satisfies the LTL specifications. Using the idea from online/offline MPC, one thing we can do is to build a coarse finite state model offline and keep refining some portion of it online as the vehicle moves. In this case, extracting an automaton to be executed needs to be done online, so we need to make sure that this process is fast enough. [hadas: This is a good idea to deal with the complexity, but we need to see whether we lose some of the guarantees]
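A toy sketch of the receding-horizon loop above, on a hypothetical 1-D world: plan only a few steps ahead, score the horizon's end state with an approximate cost-to-go, execute the first move, and re-plan.

```python
# Receding-horizon planning on a line of cells 0..GOAL: at each step,
# enumerate all length-HORIZON move sequences, score each by running cost
# plus an approximate terminal cost, apply only the first move, and re-plan.
from itertools import product

GOAL = 9       # cells 0..9 on a line; the robot starts at cell 0
HORIZON = 3    # only look 3 steps ahead (the "500 m" of the example)

def cost_to_go(cell):
    """Approximate terminal cost: distance remaining to the goal."""
    return abs(GOAL - cell)

def best_first_move(cell):
    """Return the first move of the best length-HORIZON sequence."""
    best = None
    for moves in product((-1, 0, 1), repeat=HORIZON):
        pos, cost = cell, 0
        for m in moves:
            pos = min(max(pos + m, 0), GOAL)
            if pos != GOAL:
                cost += 1          # running cost: each step spent off the goal
        cost += cost_to_go(pos)    # approximate cost beyond the horizon
        if best is None or cost < best[0]:
            best = (cost, moves[0])
    return best[1]

# Closed loop: apply the first move of each short plan, then re-plan.
cell, trace = 0, [0]
while cell != GOAL:
    cell = min(max(cell + best_first_move(cell), 0), GOAL)
    trace.append(cell)

print(trace)  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The point of the sketch is that only 3^HORIZON sequences are examined per step instead of the full trajectory space, while the terminal cost keeps the short-sighted plans pointed at the goal; whether the formal LTL guarantees survive this truncation is exactly the open question raised above.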

- Possible uses of MPC type ideas
  - More complicated dynamics beyond $\dot{x} = u$
  - The properties of the MPC solutions (how much online computation, margins in the solutions, feasibility, etc.) may guide the partitioning of the space (relates to Nok's 500 m example).

## Possible questions/ideas

- It is said that *Synthesis* constructs FSA as the solution of a two-player game. Can this game be expressed as a min-max optimization problem? If so, starting from that interpretation, can the problem be simplified/restructured for more efficient/smaller FSA construction? [hadas: Question: what is the benefit of expressing the game as a min-max optimization problem? The current construction of the full strategy (the full BDD) does not consider explicit states but rather groups expressed as BDDs]
- Is it possible to construct the FSA partially? [hadas: Again, my concern is the guarantee on the global behavior - need to think about what is lost.]
  - Drive the construction with some objective. Imposing some kind of optimality may reduce the number of states to be visited.
- Varying-detail construction: finer requirements/partitioning close to the initial point and reduced detail away from the initial point (just enough detail to "guarantee" some "safety" properties). Nok's 500 m example motivates this.
- How to reason about the correctness of the composition of two FSAs which are each correct by design? Conversely, given the high-level system specification/requirement, how do we come up with the requirements for each mode? For example, given the high-level requirements for Alice (go from one checkpoint to another, avoid obstacles, etc.), how do we come up with the requirements for the intersection, road, and zone modes and the transitions between these modes?

[hadas: I wouldn't call it a mode, but the fundamental question is a hard and important one - how do we automatically partition a task into (disjoint?) subtasks such that the composition of the resulting small FSAs satisfies the original large task?]