Evaluating the Effect of Dataset Size on Predictive Model Using Supervised Learning Technique

Bibliographic Details
Main Authors: A. R. Ajiboye, R. Abdullah-Arshah, H. Qin, H. Isah-Kebbe
Format: Article
Language: English
Published: UMP Publisher 2015-02-01
Series: International Journal of Software Engineering and Computer Systems
Subjects:
Online Access: http://ijsecs.ump.edu.my/images/archive/vol1/06Ajiboye_IJSECS.pdf
Description
Summary: Learning models used for prediction are often developed without much regard for the size of dataset needed to produce models of high accuracy and good generalization. The general belief is that a large dataset is required to construct a predictive learning model; however, describing a dataset as large is circumstance-dependent, so what constitutes a big or small dataset remains vague. This paper examines the ability of a predictive model, trained on a particular size of data, to generalize when simulated with new, untrained inputs. The study experiments on three different sizes of data, using a Matlab program to create predictive models, with a view to establishing whether the size of data has any effect on the accuracy of a model. The simulated output of each model is measured using the Mean Absolute Error (MAE), and comparisons are made. Findings from this study reveal that the data partitioned for training must be a good representation of the entire set and sufficient to span the input space. The results of simulating the three network models also show that the learning model trained on the largest dataset is the most accurate and consistently delivers better, more stable results.
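
The evaluation the abstract describes can be illustrated with a short Matlab sketch: train a feedforward network on several training-set sizes and measure each model's MAE on new, untrained inputs. This is a hedged illustration only, not the authors' code; the dataset sizes, synthetic target function, and network configuration below are assumptions, and the sketch assumes MATLAB's Neural Network Toolbox (feedforwardnet, train) is available.

% Minimal sketch (illustrative, not the paper's experiment): compare MAE
% of models trained on three hypothetical dataset sizes.
sizes = [100 500 2000];                      % hypothetical training-set sizes
for k = 1:numel(sizes)
    n = sizes(k);
    x = rand(1, n);                          % inputs drawn to span the input space
    t = sin(2*pi*x) + 0.1*randn(1, n);       % noisy synthetic targets
    net = feedforwardnet(10);                % one hidden layer of 10 neurons
    net.trainParam.showWindow = false;       % suppress the training GUI
    net = train(net, x, t);
    xNew = rand(1, 200);                     % new, untrained inputs
    yNew = net(xNew);                        % simulate the trained model
    maeVal = mean(abs(yNew - sin(2*pi*xNew)));   % Mean Absolute Error
    fprintf('training size = %4d -> MAE = %.4f\n', n, maeVal);
end

Under these assumptions, the loop mirrors the paper's comparison: each model is simulated on inputs it never saw during training, and the MAE values across the three sizes indicate how training-set size affects generalization.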
ISSN: 2289-8522, 2180-0650