Near-optimal no-regret algorithms for zero-sum games
Main Authors:
Other Authors:
Format: Article
Language: English
Published: Society for Industrial and Applied Mathematics, 2012-09-21
Subjects:
Summary: We propose a new no-regret learning algorithm. When used against an adversary, our algorithm achieves average regret that scales as O(1/√T) with the number T of rounds. This regret bound is optimal but not rare, as there are a multitude of learning algorithms with this guarantee. However, when our algorithm is used by both players of a zero-sum game, their average regret scales as O(ln T/T), guaranteeing a near-linear rate of convergence to the value of the game. This represents an almost-quadratic improvement over the rate of convergence to the value of a game previously known to be achievable by any no-regret learning algorithm, and it is essentially optimal, as we show a lower bound of Ω(1/T). Moreover, the dynamics produced by our algorithm in the game setting are strongly uncoupled, in that each player is oblivious to the payoff matrix of the game and the number of strategies of the other player, has limited private storage, and is not allowed funny bit arithmetic that could trivialize the problem; instead, each player only observes the performance of his own strategies against the actions of the other player, and may use private storage to remember past played strategies and observed payoffs, or cumulative information thereof. Here, too, our rate of convergence is nearly optimal and represents an almost-quadratic improvement over the best previously known strongly-uncoupled dynamics.

Funding: National Science Foundation (U.S.) (CAREER Award CCF-0953960)
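The quantities in the abstract can be made concrete with a small self-play simulation. The sketch below uses the standard Hedge (multiplicative weights) algorithm, not the paper's algorithm: Hedge only attains the O(1/√T) rate that the paper improves to O(ln T/T), but it illustrates the setting described above. Each player updates from the payoffs of its own strategies alone (the uncoupled feedback the abstract describes), and the duality gap of the players' average strategies, which is bounded by the sum of their average regrets, shrinks toward the value of the game. The function name `hedge_self_play`, the step sizes, and the matching-pennies example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hedge_self_play(A, T):
    """Both players run Hedge on the zero-sum game with payoff matrix A
    (row player receives x^T A y, column player receives -x^T A y).
    Each player observes only the payoff vector of its own strategies."""
    n, m = A.shape
    eta_x = np.sqrt(np.log(n) / T)      # standard Hedge step size for payoffs in [-1, 1]
    eta_y = np.sqrt(np.log(m) / T)
    Gx, Gy = np.zeros(n), np.zeros(m)   # cumulative payoff of each strategy
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for _ in range(T):
        # Play mixed strategies proportional to exp(eta * cumulative payoff);
        # subtracting the max first keeps the exponentials numerically stable.
        x = np.exp(eta_x * (Gx - Gx.max())); x /= x.sum()
        y = np.exp(eta_y * (Gy - Gy.max())); y /= y.sum()
        Gx += A @ y                     # row player's observed payoffs this round
        Gy -= A.T @ x                   # column player's observed payoffs this round
        x_avg += x / T
        y_avg += y / T
    return x_avg, y_avg

# Matching pennies: value 0, unique equilibrium (1/2, 1/2) for both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
for T in (100, 1_000, 10_000):
    x, y = hedge_self_play(A, T)
    gap = (A @ y).max() - (A.T @ x).min()  # duality gap of the average strategies
    print(f"T={T:6d}  duality gap ~ {gap:.4f}")  # shrinks like O(1/sqrt(T)) for Hedge
```

Running this shows the duality gap decaying at roughly the O(1/√T) rate expected of Hedge; the paper's contribution is an algorithm whose self-play gap instead decays at the near-optimal O(ln T/T) rate.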