Does Statistical Significance Help to Evaluate Predictive Performance of Competing Models?
Format: Article
Language: English
Published: Tripal Publishing House, 2017-06-01
Series: Journal of Economics and Financial Analysis
Online Access: http://ojs.tripaledu.com/jefa/article/download/3/1
Summary: In a Monte Carlo experiment with simulated data, we show that, as a point-forecast criterion, Clark and West's (2006) unconditional test of mean squared prediction errors does not reflect the relative performance of a superior model over a weaker one. The simulation results show that even when the mean squared prediction error of a constructed superior model is far below that of a weaker alternative, the Clark-West test does not reflect this in its test statistic. Therefore, studies that use this statistic to test the predictive accuracy of alternative exchange rate models, stock return predictability, inflation forecasting, and unemployment forecasting should not place too much weight on the magnitude of statistically significant Clark-West test statistics.
ISSN: 2521-6627, 2521-6619
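The summary contrasts two quantities: the raw mean squared prediction errors (MSPEs) of competing nested models, and the Clark-West (2006) adjusted test statistic. The sketch below is a minimal, hypothetical illustration of that kind of comparison, not the authors' actual simulation design: it simulates data where the unrestricted model is clearly superior, then computes both MSPEs and the Clark-West adjusted statistic. All variable names, the split-sample scheme, and the data-generating process are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed DGP: y_t = beta * x_t + e_t. The restricted (nested) model
# forecasts zero; the unrestricted model forecasts beta_hat * x_t.
T, R = 500, 250          # total sample, in-sample estimation window
beta = 0.8
x = rng.normal(size=T)
y = beta * x + rng.normal(size=T)

# Simple split-sample pseudo out-of-sample exercise (an assumption,
# not the paper's design): estimate beta on the first R observations.
b_hat = np.sum(x[:R] * y[:R]) / np.sum(x[:R] ** 2)
f1 = np.zeros(T - R)     # restricted model forecast: always zero
f2 = b_hat * x[R:]       # unrestricted model forecast

e1 = y[R:] - f1
e2 = y[R:] - f2
mspe1 = np.mean(e1 ** 2)
mspe2 = np.mean(e2 ** 2)

# Clark-West adjusted loss differential for nested models:
# f_adj_t = e1_t^2 - [ e2_t^2 - (f1_t - f2_t)^2 ]
# The (f1 - f2)^2 term corrects the upward noise in the larger
# model's MSPE under the null that the small model is true.
f_adj = e1 ** 2 - (e2 ** 2 - (f1 - f2) ** 2)
n = len(f_adj)
cw_stat = np.mean(f_adj) / (np.std(f_adj, ddof=1) / np.sqrt(n))

print(f"MSPE restricted:   {mspe1:.3f}")
print(f"MSPE unrestricted: {mspe2:.3f}")
print(f"Clark-West t-stat: {cw_stat:.2f}")
```

The paper's point is precisely that the size of `cw_stat` need not track how large the MSPE gap is, so a significant statistic establishes direction, not magnitude, of the performance difference.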