Information theoretic properties of Markov Random Fields, and their algorithmic applications

Bibliographic Details
Main Authors: Hamilton, Linus Ulysses (Contributor), Koehler, Frederic (Contributor), Moitra, Ankur (Contributor)
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor), Massachusetts Institute of Technology. Department of Mathematics (Contributor)
Format: Article
Language: English
Published: 2018-06-11.
Description
Summary: Markov random fields are a popular model for high-dimensional probability distributions. Over the years, many mathematical, statistical and algorithmic problems on them have been studied. Until recently, the only known algorithms for provably learning them relied on exhaustive search, correlation decay or various incoherence assumptions. Bresler [1] gave an algorithm for learning general Ising models on bounded degree graphs. His approach was based on a structural result about mutual information in Ising models. Here we take a more conceptual approach to proving lower bounds on the mutual information. Our proof generalizes well beyond Ising models, to arbitrary Markov random fields with higher order interactions. As an application, we obtain algorithms for learning Markov random fields on bounded degree graphs on n nodes with r-order interactions in n^r time and log n sample complexity. Our algorithms also extend to various partial observation models.
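
The core primitive behind Bresler-style structure learning, which the summary's structural result lower-bounds, is the (conditional) mutual information between spins: if two nodes are neighbors in the graph, this quantity cannot be too small. The following is a minimal illustrative sketch, not the authors' algorithm: it computes a plug-in estimate of the unconditional mutual information I(X_u; X_v) from samples of an Ising model with spins in {-1, +1}. The function name and the toy data are hypothetical.

```python
import numpy as np

def empirical_mutual_information(samples: np.ndarray, u: int, v: int) -> float:
    """Plug-in estimate of I(X_u; X_v) in nats.

    samples: array of shape (m, n); each row is one sample from the model,
    with entries in {-1, +1}. Structure-learning algorithms in the style of
    Bresler [1] threshold such (conditional) mutual information estimates to
    decide whether u and v can be neighbors.
    """
    mi = 0.0
    for a in (-1, 1):
        for b in (-1, 1):
            p_ab = np.mean((samples[:, u] == a) & (samples[:, v] == b))
            p_a = np.mean(samples[:, u] == a)
            p_b = np.mean(samples[:, v] == b)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

# Toy usage: spins 0 and 1 are perfectly correlated; spin 2 is independent.
rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=(10_000, 1))
samples = np.hstack([s, s, rng.choice([-1, 1], size=(10_000, 1))])
print(empirical_mutual_information(samples, 0, 1))  # close to log 2 ~ 0.693
print(empirical_mutual_information(samples, 0, 2))  # close to 0
```

In the learning setting one conditions on candidate neighborhood sets S and estimates I(X_u; X_v | X_S); with log n samples per the summary's guarantee, such estimates concentrate well enough to recover bounded degree graphs.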