From MurrayWiki
Risk-Averse Planning Under Uncertainty
Abstract 
We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives. Synthesizing risk-averse optimal policies for POMDPs requires infinite memory and is thus undecidable. To overcome this difficulty, we propose a method based on bounded policy iteration for designing stochastic but finite-state (memory) controllers, which takes advantage of standard convex optimization methods. Given a memory budget and an optimality criterion, the proposed method modifies the stochastic finite-state controller, leading to suboptimal solutions with lower coherent risk.
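The policy class named in the abstract, a stochastic finite-state controller (FSC), can be sketched concretely. The toy below is illustrative only and is not the paper's bounded policy iteration method: the memory size, transition probabilities, observation noise, and reward are all made-up assumptions for a 2-state POMDP, shown just to clarify what "stochastic but finite-state (memory) controller" means.

```python
import random

# Sketch of a stochastic finite-state controller (FSC) rolled out on a
# toy 2-state POMDP. All numbers are illustrative assumptions, not the
# paper's model or its optimized controller parameters.

ACTIONS = [0, 1]
OBS = [0, 1]

# psi[q][a]: probability of choosing action a while in memory node q
psi = {0: {0: 0.7, 1: 0.3},
       1: {0: 0.2, 1: 0.8}}

# eta[(q, a, o)][q2]: probability of moving to memory node q2 after
# taking action a in node q and then observing o
eta = {(q, a, o): {0: 0.5, 1: 0.5}
       for q in (0, 1) for a in ACTIONS for o in OBS}

def sample(dist, rng):
    """Draw one key from a {outcome: probability} dict."""
    r, acc = rng.random(), 0.0
    for k, p in dist.items():
        acc += p
        if r <= acc:
            return k
    return k  # guard against floating-point round-off

def run_episode(steps=20, seed=0):
    """Roll the FSC out; returns the total (risk-neutral) reward."""
    rng = random.Random(seed)
    s, q, total = 0, 0, 0.0          # hidden state, memory node, reward
    for _ in range(steps):
        a = sample(psi[q], rng)       # action from the current node
        if a == 1 and rng.random() < 0.5:
            s = 1 - s                 # toy dynamics: action 1 may flip state
        o = s if rng.random() < 0.8 else 1 - s   # noisy observation
        total += 1.0 if s == 1 else 0.0          # reward for hidden state 1
        q = sample(eta[(q, a, o)], rng)          # memory update
    return total
```

The controller never sees the hidden state `s`, only the observation `o`; its entire internal state is the finite memory node `q`. The paper's method would tune the entries of `psi` and `eta` (and grow the memory within a budget) via convex optimization to reduce a dynamic coherent risk measure, rather than the plain sum of rewards accumulated here.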


Authors  Mohamadreza Ahmadi, Masahiro Ono, Michel D. Ingham, Richard M. Murray, Aaron D. Ames
ID  2019l
Source  2020 American Control Conference (ACC)
Tag  ahm+20acc
Title  Risk-Averse Planning Under Uncertainty
Type  Conference paper
Categories  Papers
Modification date  26 May 2020 05:57:43
URL  https://arxiv.org/abs/1909.12499
