Publications

Reinforcement Learning When All Actions are Not Always Available

Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020)

Published February 7, 2020

Yash Chandak, Georgios Theocharous, Blossom Metevier, Philip S. Thomas


Abstract

The Markov decision process (MDP) formulation used to model many real-world sequential decision-making problems does not capture the setting where the set of available decisions (actions) at each time step is stochastic. Recently, the stochastic action set Markov decision process (SAS-MDP) formulation has been proposed, which captures the concept of a stochastic action set. In this paper we argue that existing reinforcement learning (RL) algorithms for SAS-MDPs suffer from divergence issues, and we present new algorithms for SAS-MDPs that incorporate variance reduction techniques unique to this setting, along with conditions for their convergence. We conclude with experiments that demonstrate the practicality of our approaches on several tasks inspired by real-life use cases wherein the action set is stochastic.
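To make the setting concrete, the sketch below shows tabular Q-learning on a toy chain MDP in which each action is independently available with some probability at every step, so the agent must act, and bootstrap, only over the currently available actions. This is an illustrative example of the stochastic-action-set setting described in the abstract, not the paper's algorithm; the environment (chain length, rewards, availability probability) and all names are invented for the demo.

```python
import random

# Toy setting (all details assumed for illustration, not from the paper):
# a 5-state chain where action 1 moves right, action 0 moves left, and
# reaching the rightmost state yields reward 1. At every time step each
# action is available independently with probability P_AVAIL.
N_STATES, ACTIONS = 5, [0, 1]
ALPHA, GAMMA, P_AVAIL = 0.1, 0.9, 0.8

def sample_action_set():
    """Sample the stochastic action set A_t; guarantee it is non-empty."""
    avail = [a for a in ACTIONS if random.random() < P_AVAIL]
    return avail if avail else [random.choice(ACTIONS)]

def step(s, a):
    """Deterministic chain dynamics with reward 1 at the right end."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def train(episodes=500, eps=0.1, horizon=20):
    Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            avail = sample_action_set()          # stochastic action set A_t
            if random.random() < eps:            # epsilon-greedy, but only
                a = random.choice(avail)         # over available actions
            else:
                a = max(avail, key=lambda x: Q[s][x])
            s2, r = step(s, a)
            # Bootstrap over a sampled next-step action set rather than the
            # full action space -- the key difference from ordinary Q-learning.
            next_avail = sample_action_set()
            target = r + GAMMA * max(Q[s2][x] for x in next_avail)
            Q[s][a] += ALPHA * (target - Q[s][a])
            s = s2
    return Q
```

Note that taking the max only over a sampled subset of next-step actions is exactly where the extra variance in this setting comes from, which motivates the variance-reduction techniques the paper develops.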
