LEADER |
01891 am a22001933u 4500 |
001 |
75033 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Desai, Vijay V.
|e author
|
710 |
2 |
0 |
|a Sloan School of Management
|e contributor
|
700 |
1 |
0 |
|a Farias, Vivek F.
|e contributor
|
700 |
1 |
0 |
|a Farias, Vivek F.
|e author
|
700 |
1 |
0 |
|a Moallemi, Ciamac C.
|e author
|
245 |
0 |
0 |
|a Approximate Dynamic Programming via a Smoothed Linear Program
|
260 |
|
|
|b Institute for Operations Research and the Management Sciences (INFORMS),
|c 2012-11-27T17:44:05Z.
|
856 |
|
|
|z Get fulltext
|u http://hdl.handle.net/1721.1/75033
|
520 |
|
|
|a We present a novel linear program for the approximation of the dynamic programming cost-to-go function in high-dimensional stochastic control problems. LP approaches to approximate DP have typically relied on a natural "projection" of a well-studied linear program for exact dynamic programming. Such programs restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program, the "smoothed approximate linear program," is distinct from such approaches and relaxes the restriction to lower-bounding approximations in an appropriate fashion while remaining computationally tractable. Doing so appears to have several advantages: First, we demonstrate bounds on the quality of approximation to the optimal cost-to-go function afforded by our approach. These bounds are, in general, no worse than those available for extant LP approaches, and for specific problem instances can be shown to be arbitrarily stronger. Second, experiments with our approach on a pair of challenging problems (the game of Tetris and a queueing network control problem) show that the approach outperforms the existing LP approach (which has previously been shown to be competitive with several ADP algorithms) by a substantial margin.
|
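The abstract in field 520 describes relaxing the lower-bound constraints of the approximate linear program (ALP) with budgeted slack variables. The following is a minimal sketch of that idea on a hypothetical toy MDP, not the paper's actual experimental setup: the MDP, basis functions, and budget value are invented for illustration, and `scipy.optimize.linprog` is assumed as the LP solver. Setting the slack budget to zero recovers the standard ALP.

```python
# Sketch of a "smoothed approximate linear program" (SALP) on a toy MDP.
# Everything here (MDP, basis, budget) is a hypothetical illustration.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, alpha = 3, 2, 0.9            # states, actions, discount factor

# Random toy costs g(x, a) and transition kernels P_a (rows sum to 1).
g = rng.uniform(0.0, 1.0, size=(n, m))
P = rng.uniform(size=(m, n, n))
P /= P.sum(axis=2, keepdims=True)

# Basis: a constant feature plus the state index, so Phi is n x K.
Phi = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
K = Phi.shape[1]

c = np.full(n, 1.0 / n)            # state-relevance weights
pi = np.full(n, 1.0 / n)           # distribution weighting the slack budget
theta = 0.05                       # slack budget; theta = 0 gives the ALP

# Decision variables x = (r, s): weights r in R^K (free), slacks s >= 0.
# SALP (budget form): maximize c' Phi r subject to
#   (Phi r)(x) <= g(x,a) + alpha * sum_x' P_a(x,x') (Phi r)(x') + s(x)
#   pi' s <= theta, s >= 0
rows, rhs = [], []
for a in range(m):
    A_r = Phi - alpha * P[a] @ Phi           # Bellman-constraint rows in r
    A_s = -np.eye(n)                         # slack loosens each constraint
    rows.append(np.hstack([A_r, A_s]))
    rhs.append(g[:, a])
rows.append(np.hstack([np.zeros(K), pi]))    # slack-budget row
rhs.append(np.array([theta]))

A_ub = np.vstack(rows)
b_ub = np.concatenate(rhs)
obj = np.concatenate([-(c @ Phi), np.zeros(n)])   # linprog minimizes
bounds = [(None, None)] * K + [(0, None)] * n

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
J_approx = Phi @ res.x[:K]                        # approximate cost-to-go
print(res.status, np.round(J_approx, 3))
```

Because the slacks let a few Bellman constraints be violated (up to the budget theta), the optimizer can trade a small constraint violation for a better fit under the state-relevance weights, which is the relaxation the abstract refers to.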
546 |
|
|
|a en_US
|
655 |
7 |
|
|a Article
|
773 |
|
|
|t Operations Research
|