On measure concentration of random maximum a-posteriori perturbations

The maximum a-posteriori (MAP) perturbation framework has emerged as a useful approach for inference and learning in high dimensional complex models. By maximizing a randomly perturbed potential function, MAP perturbations generate unbiased samples from the Gibbs distribution. Unfortunately, the computational cost of generating so many high-dimensional random variables can be prohibitive. More efficient algorithms use sequential sampling strategies based on the expected value of low dimensional MAP perturbations. This paper develops new measure concentration inequalities that bound the number of samples needed to estimate such expected values. Applying the general result to MAP perturbations can yield a more efficient algorithm to approximate sampling from the Gibbs distribution. The measure concentration result is of general interest and may be applicable to other areas involving Monte Carlo estimation of expectations.
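The abstract's sampling claim is an instance of the Gumbel-max construction on which the MAP perturbation framework builds: perturbing each potential with i.i.d. Gumbel(0, 1) noise and taking the argmax yields an exact sample from the Gibbs distribution. Below is a minimal illustrative sketch, not code from the paper; the potential table theta is made up, and the full enumeration over states is precisely the expensive step in high dimensions that motivates the paper's low-dimensional perturbations.

    import numpy as np

    def gumbel_max_sample(theta, rng):
        # Perturb every potential with i.i.d. Gumbel(0, 1) noise and maximize;
        # the argmax is an exact sample from p(x) proportional to exp(theta[x]).
        gamma = rng.gumbel(loc=0.0, scale=1.0, size=theta.shape)
        return int(np.argmax(theta + gamma))

    # Sanity check: empirical frequencies should approach the Gibbs distribution.
    rng = np.random.default_rng(0)
    theta = np.array([1.0, 0.5, -0.3, 2.0])        # made-up potentials
    gibbs = np.exp(theta) / np.exp(theta).sum()    # exact Gibbs probabilities
    draws = [gumbel_max_sample(theta, rng) for _ in range(100_000)]
    freq = np.bincount(draws, minlength=theta.size) / len(draws)
    print(np.round(gibbs, 3))
    print(np.round(freq, 3))

Note that each draw re-perturbs every state, which is why Monte Carlo estimates built from many such maximizations are costly; the paper's concentration inequalities bound how many draws are needed when estimating expected values of low dimensional MAP perturbations instead.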

Bibliographic Details
Main Authors: Orabona, Francesco (Author), Hazan, Tamir (Author), Sarwate, Anand D. (Author), Jaakkola, Tommi S. (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Association for Computing Machinery (ACM), 2015-12-18T14:39:52Z.
Online Access: Get fulltext (http://hdl.handle.net/1721.1/100428)
LEADER 01811 am a22002173u 4500
001 100428
042 |a dc 
100 1 0 |a Orabona, Francesco  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
100 1 0 |a Jaakkola, Tommi S.  |e contributor 
700 1 0 |a Hazan, Tamir  |e author 
700 1 0 |a Sarwate, Anand D.  |e author 
700 1 0 |a Jaakkola, Tommi S.  |e author 
245 0 0 |a On measure concentration of random maximum a-posteriori perturbations 
260 |b Association for Computing Machinery (ACM),   |c 2015-12-18T14:39:52Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/100428 
520 |a The maximum a-posteriori (MAP) perturbation framework has emerged as a useful approach for inference and learning in high dimensional complex models. By maximizing a randomly perturbed potential function, MAP perturbations generate unbiased samples from the Gibbs distribution. Unfortunately, the computational cost of generating so many high-dimensional random variables can be prohibitive. More efficient algorithms use sequential sampling strategies based on the expected value of low dimensional MAP perturbations. This paper develops new measure concentration inequalities that bound the number of samples needed to estimate such expected values. Applying the general result to MAP perturbations can yield a more efficient algorithm to approximate sampling from the Gibbs distribution. The measure concentration result is of general interest and may be applicable to other areas involving Monte Carlo estimation of expectations. 
546 |a en_US 
655 7 |a Article 
773 |t Journal of Machine Learning Research