Publications

Improved Learning Complexity in Combinatorial Pure Exploration Bandits

Proceedings of the Nineteenth International Conference on Artificial Intelligence and Statistics (AISTATS-2016), pp. 1004-1012, Cadiz, Spain, 2016.

Publication date: May 1, 2015

Victor Gabillon, Alessandro Lazaric, Mohammad Ghavamzadeh, Ronald Ortner, Peter Bartlett

We study the problem of combinatorial pure exploration in the stochastic multi-armed bandit problem. We first construct a new measure of complexity that provably characterizes the learning performance of the algorithms we propose for the fixed-confidence and fixed-budget settings. We show that this complexity is never higher than that of existing work and illustrate a number of configurations in which it can be significantly smaller. While in general this improvement comes at the cost of increased computational complexity, we provide a series of examples, including a planning problem, where this extra cost is not significant.
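To make the fixed-confidence setting concrete, here is a minimal sketch of one classical pure-exploration strategy, successive elimination for best-arm identification (a simple special case of combinatorial pure exploration). This is illustrative only and is not the algorithm proposed in the paper; the function name, confidence-radius formula, and arm interface are assumptions for the sketch.

```python
import math
import random

def successive_elimination(arms, delta, max_rounds=10000):
    """Fixed-confidence best-arm identification via successive elimination.

    `arms` is a list of callables, each returning a stochastic reward in [0, 1].
    Repeatedly samples every surviving arm, then drops any arm whose upper
    confidence bound falls below the best arm's lower confidence bound.
    """
    active = list(range(len(arms)))
    counts = [0] * len(arms)
    sums = [0.0] * len(arms)
    for t in range(1, max_rounds + 1):
        for i in active:
            sums[i] += arms[i]()
            counts[i] += 1
        # Hoeffding-style confidence radius for [0, 1]-bounded rewards,
        # with a union bound over arms and rounds (one common choice).
        rad = math.sqrt(math.log(4 * len(arms) * t * t / delta) / (2 * t))
        means = {i: sums[i] / counts[i] for i in active}
        best_lcb = max(means[i] - rad for i in active)
        active = [i for i in active if means[i] + rad >= best_lcb]
        if len(active) == 1:
            return active[0]  # surviving arm is best with prob. >= 1 - delta
    # Budget exhausted: return the empirically best surviving arm.
    return max(active, key=lambda i: sums[i] / counts[i])
```

The sample complexity of such elimination schemes is governed by the gaps between arm means, which is the kind of quantity the paper's complexity measure refines for the combinatorial case.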


Research Area: AI & Machine Learning