Corporate default prediction with AdaBoost and Bagging classifiers

Bibliographic Details
Main Authors: Ramakrishnan, Suresh (Author), Mirzaei, Maryam (Author), Bekri, Mahmoud (Author)
Format: Article
Language: English
Published: Penerbit UTM Press, 2015.
Subjects: HG Finance
Online Access: Get fulltext
LEADER 01666 am a22001573u 4500
001 58171
042 |a dc 
100 1 0 |a Ramakrishnan, Suresh  |e author 
700 1 0 |a Mirzaei, Maryam  |e author 
700 1 0 |a Bekri, Mahmoud  |e author 
245 0 0 |a Corporate default prediction with AdaBoost and Bagging classifiers 
260 |b Penerbit UTM Press,   |c 2015. 
856 |z Get fulltext  |u http://eprints.utm.my/id/eprint/58171/1/SureshRamakrishnan2015_CorporateDefaultPrediction.pdf 
520 |a This study aims to present an alternative technique for corporate default prediction. Data mining techniques have been extensively applied to this task because of their ability to detect non-linear relationships and to perform well in the presence of noisy information, as commonly occurs in corporate default prediction problems. Although several advanced methods have been proposed, this area of research is not outdated and still warrants further examination. In this paper, the performance of ensemble classifier systems is assessed in terms of their ability to correctly classify default and non-default Malaysian firms listed on Bursa Malaysia. AdaBoost and Bagging are ensemble learning algorithms that construct a set of base classifiers from different versions of the training data set: sequentially reweighted versions in AdaBoost and bootstrap resamples in Bagging. We compare the prediction accuracy of both techniques against single classifiers on a set of Malaysian firms, considering the usual predictor variables such as financial ratios. We show that our approach decreases the generalization error by about thirty percent with respect to the error produced by a single classifier. 
546 |a en 
650 0 4 |a HG Finance
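
To make the comparison described in the abstract concrete, the following is a minimal sketch of evaluating a single classifier against AdaBoost and Bagging ensembles. It uses scikit-learn, which is an assumption (the record does not name the paper's software), and synthetic data as a hypothetical stand-in for the financial ratios of the Malaysian firms; it illustrates the general technique, not the authors' implementation.

    # Sketch only: scikit-learn >= 1.2 assumed (for the `estimator=` argument).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical stand-in data: rows are firms, columns mimic financial
    # ratios (leverage, liquidity, profitability, ...); class 1 = default.
    X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                               weights=[0.85, 0.15], random_state=0)

    # A shallow decision tree is a common choice of base classifier.
    base = DecisionTreeClassifier(max_depth=3, random_state=0)

    models = {
        "single tree": base,
        # AdaBoost: base classifiers built in sequence on reweighted data.
        "AdaBoost": AdaBoostClassifier(estimator=base, n_estimators=100,
                                       random_state=0),
        # Bagging: base classifiers built on bootstrap resamples.
        "Bagging": BaggingClassifier(estimator=base, n_estimators=100,
                                     random_state=0),
    }

    # 10-fold cross-validated accuracy; generalization error = 1 - accuracy.
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=10).mean()
        print(f"{name:12s} accuracy {acc:.3f}  error {1 - acc:.3f}")

On data of this kind, both ensembles typically lower the cross-validated error relative to the single tree, which is the effect the abstract quantifies at roughly thirty percent; the exact figures here depend entirely on the synthetic data and carry no relation to the paper's results.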