Learning determinantal point processes by corrective negative sampling


Bibliographic Details
Main Authors: Mariet, Zelda (Author), Gartrell, Mike (Author), Sra, Suvrit (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor)
Format: Article
Language: English
Published: MLResearch Press, 2021-04-08T15:13:46Z.
Online Access: Get fulltext
LEADER 01842 am a22001813u 4500
001 130415
042 |a dc 
100 1 0 |a Mariet, Zelda  |e author 
710 2 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
700 1 0 |a Gartrell, Mike  |e author 
700 1 0 |a Sra, Suvrit  |e author 
245 0 0 |a Learning determinantal point processes by corrective negative sampling 
260 |b MLResearch Press,   |c 2021-04-08T15:13:46Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/130415 
520 |a Determinantal Point Processes (DPPs) have attracted significant interest from the machine-learning community due to their ability to elegantly and tractably model the delicate balance between quality and diversity of sets. DPPs are commonly learned from data using maximum likelihood estimation (MLE). While fitting observed sets well, MLE for DPPs may also assign high likelihoods to unobserved sets that are far from the true generative distribution of the data. To address this issue, which reduces the quality of the learned model, we introduce a novel optimization problem, Contrastive Estimation (CE), which encodes information about "negative" samples into the basic learning model. CE is grounded in the successful use of negative information in machine-vision and language modeling. Depending on the chosen negative distribution (which may be static or evolve during optimization), CE assumes two different forms, which we analyze theoretically and experimentally. We evaluate our new model on real-world datasets; on a challenging dataset, CE learning delivers a considerable improvement in predictive performance over a DPP learned without using contrastive information. 
546 |a en 
655 7 |a Article 
773 |t 22nd International Conference on Artificial Intelligence and Statistics
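
As a rough, self-contained illustration of the model described in the abstract (field 520), the Python sketch below computes the log-likelihood of a set under an L-ensemble DPP, where P(Y) = det(L_Y) / det(L + I), together with a generic contrastive-style objective that rewards observed ("positive") sets and penalizes sampled negatives. The function names, the numpy implementation, and the exact difference-of-log-likelihoods form of the contrastive term are illustrative assumptions, not the paper's precise CE formulation.

    import numpy as np

    def dpp_log_likelihood(L, subset):
        # Log-probability of `subset` under an L-ensemble DPP:
        # log P(Y) = log det(L_Y) - log det(L + I).
        L_Y = L[np.ix_(subset, subset)]
        _, logdet_Y = np.linalg.slogdet(L_Y)
        _, logdet_Z = np.linalg.slogdet(L + np.eye(L.shape[0]))
        return logdet_Y - logdet_Z

    def contrastive_objective(L, positives, negatives):
        # Hypothetical contrastive-style objective: raise the likelihood of
        # observed sets while lowering that of "negative" sets drawn from a
        # chosen negative distribution (static or evolving, per the abstract).
        pos = sum(dpp_log_likelihood(L, Y) for Y in positives)
        neg = sum(dpp_log_likelihood(L, Y) for Y in negatives)
        return pos - neg

    # Toy usage: a PSD kernel over 4 items built from random features.
    rng = np.random.default_rng(0)
    B = rng.normal(size=(4, 3))
    L = B @ B.T + 0.1 * np.eye(4)
    print(dpp_log_likelihood(L, [0, 2]))
    print(contrastive_objective(L, positives=[[0, 2]], negatives=[[1, 3]]))

Under plain MLE one would maximize only the first sum; the abstract's point is that adding a negative term steers probability mass away from sets that are far from the data distribution.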