TY - JOUR
T1 - Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic Optimality
T2 - 34th Conference on Neural Information Processing Systems, NeurIPS 2020
AU - Jun, Kwang-Sung
AU - Zhang, Chicheng
N1 - Funding Information: We thank the anonymous reviewers, Lalit Jain, Kevin Jamieson, Akshay Krishnamurthy, Tor Lattimore, Robert Nowak, Ardhendu Tripathy, and the organizers and participants of the RL Theory Virtual Seminars for providing valuable feedback and helpful discussions. This work is supported in part by a startup fund from the University of Arizona. Publisher Copyright: © 2020 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2020
Y1 - 2020
AB - In this paper, we study stochastic structured bandits for minimizing regret. The fact that the popular optimistic algorithms do not achieve the asymptotic instance-dependent regret optimality (asymptotic optimality for short) has recently attracted the attention of researchers. On the other hand, it is known that one can achieve bounded regret (i.e., regret that does not grow indefinitely with the time horizon n) in certain instances. Unfortunately, existing asymptotically optimal algorithms rely on forced sampling that introduces an ω(1) term with respect to the time horizon n in their regret, failing to adapt to the “easiness” of the instance. In this work, we focus on the finite hypothesis class and ask whether one can achieve asymptotic optimality while enjoying bounded regret whenever possible. We provide a positive answer by introducing a new algorithm called CRush Optimism with Pessimism (CROP) that eliminates optimistic hypotheses by pulling the informative arms indicated by a pessimistic hypothesis. Our finite-time analysis shows that CROP (i) achieves a constant-factor asymptotic optimality, (ii) adapts to bounded regret thanks to its forced-exploration-free design, and (iii) enjoys a regret bound that scales not with the number of arms K but with an effective number of arms Kψ that we introduce. We also discuss a problem class where CROP can be exponentially better than existing algorithms in nonasymptotic regimes. Finally, we observe that even a clairvoyant oracle that plays according to the asymptotically optimal arm-pull scheme may suffer a linear worst-case regret, indicating that this may not be the end of optimism.
UR - http://www.scopus.com/inward/record.url?scp=85101949761&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85101949761&partnerID=8YFLogxK
M3 - Conference article
SN - 1049-5258
VL - 2020-December
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
Y2 - 6 December 2020 through 12 December 2020
ER -