Summary: | In this work we study and develop learning algorithms for networks based on regularization theory. In particular, we focus on learning possibilities for a family of regularization networks and radial basis function networks (RBF networks). A framework built on top of the basic algorithm derived from the theory is designed; it includes estimation of the regularization parameter and the kernel function by minimization of the cross-validation error. Two composite types of kernel functions are proposed, a sum kernel and a product kernel, in order to deal with heterogeneous or large data. Three learning approaches for the RBF networks are discussed: gradient learning, three-step learning, and genetic learning. Based on these, two hybrid approaches are proposed: four-step learning and hybrid genetic learning. All learning algorithms for the regularization networks and the RBF networks are studied experimentally and thoroughly compared. We claim that the regularization networks and the RBF networks are comparable in terms of generalization error, but that they differ with respect to model complexity. The regularization network approach usually leads to solutions with a higher number of basis units; thus, the RBF networks can be used as a 'cheaper' alternative in terms of model size and learning time.
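The composite kernels mentioned above can be sketched as follows. This is a minimal illustration, assuming Gaussian component kernels; the particular widths and the split of attributes into two groups are illustrative choices, not the exact formulation used in the work.

```python
import numpy as np

def gaussian_kernel(x, y, width):
    """Gaussian (RBF) kernel with a given width parameter."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * width ** 2))

def sum_kernel(x, y, widths):
    """Sum kernel: an average of component kernels with different
    widths, evaluated on the full input vector (illustrative)."""
    return sum(gaussian_kernel(x, y, w) for w in widths) / len(widths)

def product_kernel(x, y, split, w1, w2):
    """Product kernel: component kernels applied to disjoint attribute
    groups and multiplied, which suits heterogeneous data where
    different attribute groups call for different kernels."""
    return (gaussian_kernel(x[:split], y[:split], w1)
            * gaussian_kernel(x[split:], y[split:], w2))

x = np.array([0.1, 0.2, 0.3, 0.4])
y = np.array([0.0, 0.2, 0.1, 0.5])
print(sum_kernel(x, y, widths=[0.5, 1.0]))
print(product_kernel(x, y, split=2, w1=0.5, w2=1.0))
```

Both composites remain valid kernels because sums and products of positive definite kernels are positive definite.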