|
|
|
|
LEADER |
01650 am a22001693u 4500 |
001 |
137085 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Dubey, Abhimanyu
|e author
|
700 |
1 |
0 |
|a Pentland, Alex Sandy
|e author
|
245 |
0 |
0 |
|a Thompson Sampling on Symmetric Alpha-Stable Bandits
|
260 |
|
|
|b International Joint Conferences on Artificial Intelligence,
|c 2021-11-02T14:15:18Z.
|
856 |
|
|
|z Get fulltext
|u https://hdl.handle.net/1721.1/137085
|
520 |
|
|
|a © 2019 International Joint Conferences on Artificial Intelligence. All rights reserved. Thompson Sampling provides an efficient technique to introduce prior knowledge in the multi-armed bandit problem, along with providing remarkable empirical performance. In this paper, we revisit the Thompson Sampling algorithm under rewards drawn from symmetric α-stable distributions, which are a class of heavy-tailed probability distributions utilized in finance and economics, in problems such as modeling stock prices and human behavior. We present an efficient framework for posterior inference, which leads to two algorithms for Thompson Sampling in this setting. We prove finite-time regret bounds for both algorithms, and demonstrate through a series of experiments the stronger performance of Thompson Sampling in this setting. With our results, we provide an exposition of symmetric α-stable distributions in sequential decision-making, and enable sequential Bayesian inference in applications from diverse fields in finance and complex systems that operate on heavy-tailed features.
|
546 |
|
|
|a en
|
655 |
7 |
|
|a Article
|
773 |
|
|
|t 10.24963/IJCAI.2019/792
|
773 |
|
|
|t IJCAI International Joint Conference on Artificial Intelligence
|