The efficiency of different search strategies in estimating parsimony jackknife, bootstrap, and Bremer support

Bibliographic Details
Main Author: Müller, Kai F.
Format: Article
Language: English
Published: BMC 2005-10-01
Series: BMC Evolutionary Biology
Online Access: http://www.biomedcentral.com/1471-2148/5/58
Description
Summary: Background: For parsimony analyses, the most common way to estimate confidence is by resampling methods (nonparametric bootstrap, jackknife) and Bremer support (decay indices). The recent literature reveals that commonly employed parameter settings are not those recommended by theoretical considerations and by previous empirical studies. The optimal search strategy to apply during resampling was previously addressed only via the standard search strategies available in PAUP*. The question of a compromise between search extensiveness and improved support accuracy for Bremer support has received even less attention. A set of experiments was conducted on different datasets to find an empirical cut-off point beyond which increased search extensiveness no longer significantly changes Bremer support, jackknife, or bootstrap proportions.

Results: For the number of replicates needed for accurate estimates of support under resampling, a diagram is provided that helps address whether apparently different support values really differ significantly. It is shown that the use of random addition cycles and parsimony ratchet iterations during bootstrapping does not translate into higher support, nor does extending the search beyond the rather moderate effort of TBR (tree bisection and reconnection) branch swapping with one tree saved per replicate. Instead, for very large matrices, saving more than one shortest tree per iteration and using a strict consensus of these trees yields lower support than saving only one tree. This can be interpreted as a small risk of overestimating support, but it should be more than compensated by other factors that counteract an enhanced type I error. With regard to Bremer support, a rule of thumb can be derived: little is gained relative to the extra computational effort when searches are extended beyond 20 ratchet iterations per constrained node, at least for datasets within the size range found in the current literature.

Conclusion: In view of these results, bootstrap or jackknife proportions with narrow confidence intervals can be calculated even for very large datasets at less expense than often thought. In particular, iterated bootstrap methods that aim to reduce the statistical bias inherent in these proportions become more feasible when the individual bootstrap searches require less time.
ISSN: 1471-2148
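The abstract's point about replicate numbers and narrow confidence intervals follows from the binomial sampling error of a resampled proportion. As a rough illustration only (not taken from the paper, which provides its own diagram), the following Python sketch shows how the approximate 95% confidence interval of a bootstrap or jackknife proportion narrows as the number of pseudoreplicates grows, and when two support values differ by more than resampling error alone. The function names, the normal approximation to the binomial, and the example numbers are illustrative assumptions.

```python
import math

def support_ci(p, n_reps, z=1.96):
    """Approximate 95% confidence interval for a bootstrap/jackknife
    proportion p estimated from n_reps pseudoreplicates, using the
    normal approximation to the binomial sampling error."""
    se = math.sqrt(p * (1.0 - p) / n_reps)
    return max(0.0, p - z * se), min(1.0, p + z * se)

def proportions_differ(p1, p2, n_reps, z=1.96):
    """Rough two-sided check of whether two support proportions, each
    based on n_reps pseudoreplicates, differ by more than resampling
    error alone would explain."""
    se_diff = math.sqrt(p1 * (1 - p1) / n_reps + p2 * (1 - p2) / n_reps)
    return abs(p1 - p2) > z * se_diff

if __name__ == "__main__":
    # How the interval around an 85% support value narrows with more replicates.
    for n in (100, 1000, 10000):
        lo, hi = support_ci(0.85, n)
        print(f"n={n:>5}: support 0.85, approx. 95% CI = [{lo:.3f}, {hi:.3f}]")

    # Are 85% and 90% support distinguishable at a given replicate number?
    print(proportions_differ(0.85, 0.90, 100))   # False: within resampling error
    print(proportions_differ(0.85, 0.90, 2000))  # True: distinguishable
```

Under this approximation, the interval width scales with 1/sqrt(n_reps), which is why halving the cost of each individual replicate search (as the paper argues is possible with moderate TBR settings) directly buys either more replicates or shorter run times for the same precision.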