Risk-Averse Planning Under Uncertainty
Abstract
We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives. Synthesizing risk-averse optimal policies for POMDPs requires infinite memory and is thus undecidable. To overcome this difficulty, we propose a method based on bounded policy iteration for designing stochastic but finite-state (memory) controllers, which takes advantage of standard convex optimization methods. Given a memory budget and an optimality criterion, the proposed method modifies the stochastic finite-state controller, leading to sub-optimal solutions with lower coherent risk.
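For illustration, the sketch below shows the kind of per-node convex program that bounded policy iteration solves when improving a stochastic finite-state controller. It implements the standard risk-neutral node-improvement linear program, not the paper's dynamic coherent-risk variant (which uses a different convex program); the function name improve_node and all model arrays are hypothetical placeholders, assuming a discounted POMDP with transition model T, observation model Z, and reward R.

```python
import numpy as np
from scipy.optimize import linprog


def improve_node(n, V, T, Z, R, gamma=0.95):
    """One bounded-policy-iteration improvement step for FSC node n.

    V : (N, S) value of each controller node in each state
    T : (S, A, S) transition probabilities T[s, a, s']
    Z : (A, S, O) observation probabilities Z[a, s', o]
    R : (S, A) immediate rewards
    Returns (eps, c_a, c_aon); eps > 0 means node n can be improved by
    using action distribution c_a and, for each (action, observation),
    the successor-node distribution c_aon normalized by c_a.
    """
    N, S = V.shape
    A = R.shape[1]
    O = Z.shape[2]

    # Decision variables: [eps, c_a (A of them), c_aon (A*O*N of them)].
    nvar = 1 + A + A * O * N
    ca = lambda a: 1 + a
    caon = lambda a, o, n2: 1 + A + (a * O + o) * N + n2

    # Maximize eps  <=>  minimize -eps.
    cost = np.zeros(nvar)
    cost[0] = -1.0

    # For every state s, require
    #   sum_a c_a R(s,a) + gamma * sum_{a,o,n'} c_aon(a,o,n')
    #       * sum_{s'} T(s,a,s') Z(a,s',o) V(n',s')  >=  V(n,s) + eps,
    # written below as an upper-bound constraint on [eps, c].
    A_ub = np.zeros((S, nvar))
    b_ub = np.zeros(S)
    for s in range(S):
        A_ub[s, 0] = 1.0
        for a in range(A):
            A_ub[s, ca(a)] = -R[s, a]
            for o in range(O):
                for n2 in range(N):
                    A_ub[s, caon(a, o, n2)] = -gamma * np.sum(
                        T[s, a, :] * Z[a, :, o] * V[n2, :])
        b_ub[s] = -V[n, s]

    # Probability constraints: sum_a c_a = 1 and, for each (a, o),
    # sum_{n'} c_aon(a, o, n') = c_a.
    A_eq = np.zeros((1 + A * O, nvar))
    b_eq = np.zeros(1 + A * O)
    A_eq[0, 1:1 + A] = 1.0
    b_eq[0] = 1.0
    row = 1
    for a in range(A):
        for o in range(O):
            A_eq[row, ca(a)] = -1.0
            for n2 in range(N):
                A_eq[row, caon(a, o, n2)] = 1.0
            row += 1

    bounds = [(None, None)] + [(0.0, 1.0)] * (nvar - 1)
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    eps = -res.fun
    c_a = res.x[1:1 + A]
    c_aon = res.x[1 + A:].reshape(A, O, N)
    return eps, c_a, c_aon
```

If eps comes back positive, the node's action and successor-node distributions are replaced and the controller is re-evaluated; the proposed risk-averse method follows the same improve-and-re-evaluate pattern under a memory budget, but with a coherent-risk objective handled by standard convex optimization rather than this plain LP.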
Authors Mohamadreza Ahmadi, Masahiro Ono, Michel D. Ingham, Richard M. Murray, Aaron D. Ames
ID 2019l
Source 2020 American Control Conference (ACC)
Tag ahm+20-acc
Title Risk-Averse Planning Under Uncertainty
Type Conference paper
Categories Papers
Modification date 26 May 2020 05:57:43
URL https://arxiv.org/abs/1909.12499