Ather Gattami, Feb 2019

From MurrayWiki
Revision as of 18:58, 19 February 2019 by Kcai


Ather Gattami will visit Caltech on 19-20 Feb. Ather is a senior AI expert at RISE AI (Research Institutes of Sweden). His research interests lie in the mathematical foundations of Machine Learning in general and Deep Learning in particular, Reinforcement Learning, learning in dynamical systems and games, and low-rank matrix approximation and completion problems, with applications to recommender systems, anomaly detection, predictive maintenance, and Natural Language Processing.

Schedule

Tuesday, 19 Feb 2019:

  • 12 pm: Lunch with Richard (meet at 107 Steele)
  • 1:30 pm: open
  • 2:15 pm: open
  • 3:00 pm: open
  • 3:45 pm: open
  • 4:30 pm: open
  • 5:15 pm: done for the day

Wednesday, 20 Feb 2019:

  • 11 am: Seminar (abstract below)
  • 12 pm: Lunch (TBD)
  • 1:30 pm: Tung (Steele library)
  • 2:15 pm: Karena (Annenberg 331)
  • 3:00 pm: Richard C. (205 Gates-Thomas)
  • 3:45 pm: open
  • 4:30 pm: open
  • 5:15 pm: done for the day

Talk info

Reinforcement Learning for Constrained and Multi-Objective Markov Decision Processes
Ather Gattami, PhD
Wednesday, February 20, 11am
Annenberg 243

We consider the problem of optimization and learning for constrained and multi-objective Markov decision processes, for both discounted rewards and expected average rewards. We formulate the problems as zero-sum games where one player (the agent) solves a Markov decision problem and its opponent solves a bandit optimization problem, which we here call Markov-Bandit games. We extend $Q$-learning to solve Markov-Bandit games and show that our new $Q$-learning algorithms converge to the optimal solutions of the zero-sum Markov-Bandit games, and hence to the optimal solutions of the constrained and multi-objective Markov decision problems. We provide a numerical example where we calculate the optimal policies and show by simulation that the algorithm converges to them. To the best of our knowledge, this is the first time learning algorithms are guaranteed to converge to optimal stationary policies for the constrained MDP problem with discounted and expected average rewards, respectively.
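The Markov-Bandit extension described in the abstract is the subject of the talk and is not reproduced here; as background, the following is a minimal sketch of standard tabular $Q$-learning, the single-objective building block the talk extends. The toy 2-state, 2-action MDP (transition table P, reward table R) is an illustrative assumption, not an example from the talk.

```python
import numpy as np

# Minimal tabular Q-learning on a hypothetical deterministic MDP.
# States: {0, 1}; actions: {0, 1}.
# P[s, a] = next state; R[s, a] = immediate reward (toy values).
P = np.array([[0, 1],
              [0, 1]])
R = np.array([[0.0, 1.0],
              [2.0, 0.0]])

n_states, n_actions = 2, 2
gamma = 0.9     # discount factor
alpha = 0.1     # learning rate
epsilon = 0.1   # exploration rate for epsilon-greedy behavior

rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

s = 0
for _ in range(20000):
    # Epsilon-greedy action selection.
    if rng.random() < epsilon:
        a = int(rng.integers(n_actions))
    else:
        a = int(np.argmax(Q[s]))
    s_next, r = int(P[s, a]), R[s, a]
    # Q-learning update: move Q[s, a] toward r + gamma * max_a' Q[s_next, a'].
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

print(Q)
```

For this toy MDP the optimal policy alternates between the two states (action 1 in state 0, action 0 in state 1), and since the dynamics are deterministic the learned $Q$ values approach the fixed point of the Bellman optimality equation, e.g. $Q^*(0,1) = (1 + 2\gamma)/(1 - \gamma^2)$. The constrained version in the talk replaces this single scalar objective with a game against a bandit player.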