Consistency of Learning Bayesian Network Structures with Continuous Variables: An Information Theoretic Approach


Bibliographic Details
Main Author: Joe Suzuki
Format: Article
Language: English
Published: MDPI AG 2015-08-01
Series: Entropy
Online Access:http://www.mdpi.com/1099-4300/17/8/5752
Description
Summary: We consider the problem of learning a Bayesian network structure, given n examples and a prior probability, by maximizing the posterior probability. We propose an algorithm that runs in O(n log n) time and that addresses continuous and discrete variables without assuming any class of distribution. We prove that the decision is strongly consistent, i.e., correct with probability one as n → ∞. To date, consistency has only been obtained for discrete variables for this class of problem, and many authors have attempted to prove consistency when continuous variables are present. Furthermore, we prove that the "log n" term that appears in the penalty term of the description length can be replaced by 2(1+ε) log log n to obtain strong consistency, where ε > 0 is arbitrary, which implies that the Hannan–Quinn proposition holds.
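To illustrate the penalty replacement mentioned in the summary, here is a minimal sketch of a generic description-length (MDL/BIC-type) score; the particular form below, the empirical entropy H_n(g), and the parameter count k are expository assumptions, not the paper's exact criterion:

```latex
% Illustrative sketch only (assumed generic MDL/BIC-type score, not the
% paper's exact criterion). H_n(g): empirical entropy (negative maximized
% log-likelihood per example) of the n examples under candidate structure g;
% k: number of free parameters of g.
L_n(g) = n\,H_n(g) + \frac{k}{2}\log n

% Replacing \log n by 2(1+\epsilon)\log\log n with arbitrary \epsilon > 0,
% as the summary states, yields a Hannan--Quinn-type penalty under which
% the decision remains strongly consistent:
L_n^{\mathrm{HQ}}(g) = n\,H_n(g) + k(1+\epsilon)\log\log n
```

Since log log n grows far more slowly than log n, the Hannan–Quinn-type variant imposes a substantially lighter penalty on model complexity while, per the result above, still selecting the correct structure with probability one.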
ISSN: 1099-4300