LEADER |
03117 am a22003253u 4500 |
001 |
73608 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Tan, Vincent Yan Fu
|e author
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
|e contributor
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
|e contributor
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
|e contributor
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Stochastic Systems Group
|e contributor
|
700 |
1 |
0 |
|a Tan, Vincent Yan Fu
|e contributor
|
700 |
1 |
0 |
|a Fisher, John W., III
|e contributor
|
700 |
1 |
0 |
|a Willsky, Alan S.
|e contributor
|
700 |
1 |
0 |
|a Sanghavi, Sujay
|e author
|
700 |
1 |
0 |
|a Fisher, John W., III
|e author
|
700 |
1 |
0 |
|a Willsky, Alan S.
|e author
|
245 |
0 |
0 |
|a Learning graphical models for hypothesis testing and classification
|
260 |
|
|
|b Institute of Electrical and Electronics Engineers (IEEE),
|c 2012-10-04T16:50:38Z.
|
856 |
|
|
|z Get fulltext
|u http://hdl.handle.net/1721.1/73608
|
520 |
|
|
|a Sparse graphical models have proven to be a flexible class of multivariate probability models for approximating high-dimensional distributions. In this paper, we propose techniques to exploit this modeling ability for binary classification by discriminatively learning such models from labeled training data, i.e., using both positive and negative samples to optimize for the structures of the two models. We motivate why it is difficult to adapt existing generative methods, and propose an alternative method consisting of two parts. First, we develop a novel method to learn tree-structured graphical models which optimizes an approximation of the log-likelihood ratio. We also formulate a joint objective to learn a nested sequence of optimal forest-structured models. Second, we construct a classifier by using ideas from boosting to learn a set of discriminative trees. The final classifier can be interpreted as a likelihood ratio test between two models with a larger set of pairwise features. We use cross-validation to determine the optimal number of edges in the final model. The algorithm presented in this paper also provides a method to identify a subset of the edges that are most salient for discrimination. Experiments show that the proposed procedure outperforms generative methods such as Tree Augmented Naïve Bayes and Chow-Liu as well as their boosted counterparts.
|
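The abstract above outlines a two-part procedure: discriminatively learned tree/forest structures, then a boosted likelihood ratio test. As a rough illustration only, and not the authors' published objective or code, the sketch below learns a single "discriminative" spanning tree in Chow-Liu fashion, weighting each candidate edge by how differently that pair of variables behaves under the two classes; the function names, the histogram plug-in mutual-information estimate, and the |I_pos - I_neg| edge weight are illustrative assumptions, and the boosting and cross-validation steps mentioned in the abstract are omitted.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def empirical_mutual_information(x, y, n_bins=8):
    # Plug-in estimate of I(X; Y) from two 1-D samples via a joint histogram.
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log(joint / (px * py))
    # nan terms come from zero-probability cells and are treated as 0.
    return np.nansum(terms)

def discriminative_tree(X_pos, X_neg):
    # X_pos, X_neg: (samples x variables) arrays for the two classes.
    # Weight each pair of variables by |I_pos(i, j) - I_neg(i, j)|, a crude
    # stand-in for an edge's contribution to the log-likelihood ratio, then
    # take a maximum-weight spanning tree (minimum tree over negated weights).
    d = X_pos.shape[1]
    weights = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            delta = abs(empirical_mutual_information(X_pos[:, i], X_pos[:, j])
                        - empirical_mutual_information(X_neg[:, i], X_neg[:, j]))
            weights[i, j] = delta + 1e-9  # small offset keeps every pair usable as an edge
    mst = minimum_spanning_tree(-weights).toarray()
    return [(i, j) for i in range(d) for j in range(d) if mst[i, j] != 0]

In the paper, one such structure is learned for each class so that their ratio defines the test, and boosting combines several discriminative trees; this sketch covers only the flavour of the structure-learning step.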
520 |
|
|
|a United States. Air Force Office of Scientific Research (Grant FA9550-08-1-1080)
|
520 |
|
|
|a United States. Army Research Office. Multidisciplinary University Research Initiative (Grant W911NF-06-1-0076)
|
520 |
|
|
|a United States. Air Force Office of Scientific Research. Multidisciplinary University Research Initiative (Grant FA9550-06-1-0324)
|
520 |
|
|
|a Singapore. Agency for Science, Technology and Research
|
520 |
|
|
|a United States. Air Force Research Laboratory (Award No. FA8650-07-D-1220)
|
546 |
|
|
|a en_US
|
655 |
7 |
|
|a Article
|
773 |
|
|
|t IEEE Transactions on Signal Processing
|