Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions

In this work we develop efficient methods for learning random MAP predictors for structured label problems. In particular, we construct posterior distributions over perturbations that can be adjusted via stochastic gradient methods. We show that every smooth posterior distribution would suffice to define a smooth PAC-Bayesian risk bound suitable for gradient methods. In addition, we relate the posterior distributions to computational properties of the MAP predictors. We suggest multiplicative posteriors to learn super-modular potential functions that accompany specialized MAP predictors such as graph-cuts. We also describe label-augmented posterior models that can use efficient MAP approximations, such as those arising from linear program relaxations.
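A minimal sketch of the idea behind a random MAP predictor, as the abstract describes it: scores over labels are perturbed with random noise and the argmax is returned, so the predictor defines a distribution over labels whose spread is governed by a posterior parameter (here a single noise scale, a simplifying assumption; the label set, scores, and Monte-Carlo loss estimate below are all hypothetical toy choices, not the paper's construction).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_map_predict(scores, scale, rng):
    # Perturb each label's score with Gumbel noise and take the argmax:
    # a "random MAP" prediction over a small, explicitly enumerated label set.
    noise = rng.gumbel(size=scores.shape)
    return int(np.argmax(scores + scale * noise))

# Hypothetical toy problem: 4 candidate labels with fixed scores.
scores = np.array([1.0, 2.0, 1.5, 0.5])

def expected_loss(scale, gold, n_samples=2000):
    # Monte-Carlo estimate of the randomized predictor's expected 0/1 loss,
    # for a given perturbation scale -- the kind of posterior parameter one
    # could tune with stochastic gradients, as the abstract describes.
    losses = [random_map_predict(scores, scale, rng) != gold
              for _ in range(n_samples)]
    return float(np.mean(losses))

small = expected_loss(0.1, gold=1)
large = expected_loss(5.0, gold=1)
# With small noise the predictor almost always returns the highest-scoring
# label; with large noise it is closer to uniform, so the expected loss grows.
```

Note that with a non-decomposable loss one cannot push the expectation inside per-label terms, which is why the paper works with risk bounds over the full posterior rather than a per-part decomposition.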


Bibliographic Details
Main Authors: Hazan, Tamir (Author), Maji, Subhransu (Author), Keshet, Joseph (Author), Jaakkola, Tommi S. (Contributor)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Neural Information Processing Systems, 2015-12-17T00:46:23Z.
Subjects:
Online Access: Get fulltext
LEADER 01674 am a22002173u 4500
001 100402
042 |a dc 
100 1 0 |a Hazan, Tamir  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
100 1 0 |a Jaakkola, Tommi S.  |e contributor 
700 1 0 |a Maji, Subhransu  |e author 
700 1 0 |a Keshet, Joseph  |e author 
700 1 0 |a Jaakkola, Tommi S.  |e author 
245 0 0 |a Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions 
260 |b Neural Information Processing Systems,   |c 2015-12-17T00:46:23Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/100402 
520 |a In this work we develop efficient methods for learning random MAP predictors for structured label problems. In particular, we construct posterior distributions over perturbations that can be adjusted via stochastic gradient methods. We show that every smooth posterior distribution would suffice to define a smooth PAC-Bayesian risk bound suitable for gradient methods. In addition, we relate the posterior distributions to computational properties of the MAP predictors. We suggest multiplicative posteriors to learn super-modular potential functions that accompany specialized MAP predictors such as graph-cuts. We also describe label-augmented posterior models that can use efficient MAP approximations, such as those arising from linear program relaxations. 
546 |a en_US 
655 7 |a Article 
773 |t Advances in Neural Information Processing Systems (NIPS)