Approximation of the formal Bayesian model comparison using the extended conditional predictive ordinate criterion

According to decision theory, the formal Bayes factor (BF) is the optimal method for Bayesian model comparison. However, the formal BF is computationally troublesome for more complex models. When the predictive distributions under the competing models lack a closed form, a cross-validation idea, called the...


Bibliographic Details
Main Author: Hoque, Md Rashedul
Language:English
Published: University of British Columbia 2017
Online Access:http://hdl.handle.net/2429/62844
id ndltd-UBC-oai-circle.library.ubc.ca-2429-62844
record_format oai_dc
spelling ndltd-UBC-oai-circle.library.ubc.ca-2429-628442018-01-05T17:30:01Z Approximation of the formal Bayesian model comparison using the extended conditional predictive ordinate criterion Hoque, Md Rashedul According to decision theory, the formal Bayes factor (BF) is the optimal method for Bayesian model comparison. However, the formal BF is computationally troublesome for more complex models. When the predictive distributions under the competing models lack a closed form, a cross-validation idea, the conditional predictive ordinate (CPO) criterion, can be used instead. In cross-validation terms, this is a ''leave-one-out'' approach. The CPO can be computed directly from Monte Carlo (MC) output, and the resulting Bayesian model comparison is called the pseudo Bayes factor (PBF). We can approach the formal Bayesian model comparison by increasing the ''leave-out size''; at ''leave-out all'' we recover the formal BF. However, the MC error also grows with the leave-out size. In this study, we examine this trade-off for linear and logistic regression models. Our study reveals that, when two close linear models are compared, the PBF can favour a different model than the BF, so larger leave-out sizes, which give results closer to the optimal BF, are preferred. On the other hand, MC-sample-based formal Bayesian model comparisons incur more MC error as the leave-out size increases; we observe this by comparison with the available closed-form results. Still, for a reasonable error level, a leave-out size greater than one can be used instead of fixing it at one. These findings extend to logistic models, where a closed-form solution is unavailable. 
Science, Faculty of Statistics, Department of Graduate 2017-08-28T22:17:44Z 2017-08-28T22:17:44Z 2017 2017-11 Text Thesis/Dissertation http://hdl.handle.net/2429/62844 eng Attribution-NonCommercial-NoDerivatives 4.0 International http://creativecommons.org/licenses/by-nc-nd/4.0/ University of British Columbia
collection NDLTD
language English
sources NDLTD
description According to decision theory, the formal Bayes factor (BF) is the optimal method for Bayesian model comparison. However, the formal BF is computationally troublesome for more complex models. When the predictive distributions under the competing models lack a closed form, a cross-validation idea, the conditional predictive ordinate (CPO) criterion, can be used instead. In cross-validation terms, this is a ''leave-one-out'' approach. The CPO can be computed directly from Monte Carlo (MC) output, and the resulting Bayesian model comparison is called the pseudo Bayes factor (PBF). We can approach the formal Bayesian model comparison by increasing the ''leave-out size''; at ''leave-out all'' we recover the formal BF. However, the MC error also grows with the leave-out size. In this study, we examine this trade-off for linear and logistic regression models. Our study reveals that, when two close linear models are compared, the PBF can favour a different model than the BF, so larger leave-out sizes, which give results closer to the optimal BF, are preferred. On the other hand, MC-sample-based formal Bayesian model comparisons incur more MC error as the leave-out size increases; we observe this by comparison with the available closed-form results. Still, for a reasonable error level, a leave-out size greater than one can be used instead of fixing it at one. These findings extend to logistic models, where a closed-form solution is unavailable. === Science, Faculty of === Statistics, Department of === Graduate
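The ''leave-one-out'' CPO described above can be estimated from posterior Monte Carlo draws via the harmonic-mean identity CPO_i = [E_post(1/p(y_i | θ))]^{-1}, and the PBF is the ratio of the products of CPOs under the two models. The following is a minimal illustrative sketch, not the thesis's code: the simulated data, conjugate normal prior, known error variance, and the two competing linear models are all assumptions made for the example.

```python
# Sketch: pseudo Bayes factor (PBF) from leave-one-out CPOs, where each
# CPO_i is estimated by the harmonic mean of the likelihood over MC draws.
# Data, priors, and the pair of models compared are illustrative assumptions.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n, sigma = 50, 1.0
X_full = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
y = X_full @ beta_true + rng.normal(scale=sigma, size=n)

def log_cpo(X, y, sigma=1.0, tau=10.0, M=5000, rng=rng):
    """Log CPO_i for a conjugate normal linear model with known sigma."""
    # Posterior: beta | y ~ N(mu, V), with prior beta ~ N(0, tau^2 I)
    V = np.linalg.inv(X.T @ X / sigma**2 + np.eye(X.shape[1]) / tau**2)
    mu = V @ (X.T @ y) / sigma**2
    draws = rng.multivariate_normal(mu, V, size=M)   # (M, p) posterior draws
    resid = y[None, :] - draws @ X.T                 # (M, n) residuals
    loglik = -0.5 * np.log(2 * np.pi * sigma**2) - resid**2 / (2 * sigma**2)
    # Harmonic-mean estimate of CPO_i, computed in log space for stability:
    # log CPO_i = -log( (1/M) * sum_m exp(-loglik[m, i]) )
    return -(logsumexp(-loglik, axis=0) - np.log(M))

log_cpo_1 = log_cpo(X_full, y)          # model 1: intercept + covariate
log_cpo_2 = log_cpo(X_full[:, :1], y)   # model 2: intercept only
log_pbf = log_cpo_1.sum() - log_cpo_2.sum()
print(f"log PBF (model 1 vs model 2): {log_pbf:.2f}")
```

Extending the leave-out size beyond one, as the abstract proposes, replaces the single-observation likelihood p(y_i | θ) with the joint likelihood of each left-out block, which is what drives the growth in MC error.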
author Hoque, Md Rashedul
spellingShingle Hoque, Md Rashedul
Approximation of the formal Bayesian model comparison using the extended conditional predictive ordinate criterion
author_facet Hoque, Md Rashedul
author_sort Hoque, Md Rashedul
title Approximation of the formal Bayesian model comparison using the extended conditional predictive ordinate criterion
title_short Approximation of the formal Bayesian model comparison using the extended conditional predictive ordinate criterion
title_full Approximation of the formal Bayesian model comparison using the extended conditional predictive ordinate criterion
title_fullStr Approximation of the formal Bayesian model comparison using the extended conditional predictive ordinate criterion
title_full_unstemmed Approximation of the formal Bayesian model comparison using the extended conditional predictive ordinate criterion
title_sort approximation of the formal bayesian model comparison using the extended conditional predictive ordinate criterion
publisher University of British Columbia
publishDate 2017
url http://hdl.handle.net/2429/62844
work_keys_str_mv AT hoquemdrashedul approximationoftheformalbayesianmodelcomparisonusingtheextendedconditionalpredictiveordinatecriterion
_version_ 1718585913933561856