Pattern Layer Reduction for a Generalized Regression Neural Network by Using a Self-Organizing Map

In a general regression neural network (GRNN), the number of neurons in the pattern layer is proportional to the number of training samples in the dataset. The use of a GRNN in applications that have relatively large datasets becomes troublesome due to the architecture and speed required. The great number of neurons in the pattern layer requires a substantial increase in memory usage and causes a substantial decrease in calculation speed. Therefore, there is a strong need for pattern layer size reduction. In this study, a self-organizing map (SOM) structure is introduced as a pre-processor for the GRNN. First, an SOM is generated for the training dataset. Second, each training record is labelled with the most similar map unit. Lastly, when a new test record is applied to the network, the most similar map units are detected, and the training data that have the same labels as the detected units are fed into the network instead of the entire training dataset. This scheme enables a considerable reduction in the pattern layer size. The proposed hybrid model was evaluated by using fifteen benchmark test functions and eight different UCI datasets. According to the simulation results, the proposed model significantly simplifies the GRNN's structure without any performance loss.
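
The abstract describes a three-step procedure (train an SOM, label training records by best-matching unit, filter the pattern layer at test time). Below is a minimal sketch of that pipeline, assuming the MiniSom library as the SOM implementation and a single best-matching unit per test record; the paper detects "the most similar map units" and may use more than one. The grid size, sigma values, and toy data are illustrative placeholders, not the authors' settings.

```python
# Minimal sketch of the SOM + GRNN pipeline from the abstract. MiniSom,
# the 10x10 grid, the sigma values, and the toy data are assumptions for
# illustration; the single-BMU lookup simplifies the paper's selection of
# the most similar map units.
import numpy as np
from minisom import MiniSom

def grnn_predict(x, X_pat, y_pat, sigma=0.5):
    # Standard GRNN estimate over the pattern-layer samples:
    # y_hat = sum_i y_i * exp(-||x - x_i||^2 / (2*sigma^2)) / sum_i exp(...)
    d2 = np.sum((X_pat - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.dot(w, y_pat) / (np.sum(w) + 1e-12))  # underflow guard

rng = np.random.default_rng(0)
X_train = rng.random((1000, 4))       # toy training inputs (hypothetical)
y_train = X_train.sum(axis=1)         # toy regression targets

# Step 1: generate an SOM for the training dataset.
som = MiniSom(10, 10, X_train.shape[1], sigma=1.0, learning_rate=0.5,
              random_seed=0)
som.train_random(X_train, 5000)

# Step 2: label each training record with its most similar map unit (BMU).
labels = [som.winner(x) for x in X_train]

# Step 3: for a new test record, detect its most similar map unit and feed
# only the training records sharing that label into the pattern layer.
x_test = rng.random(4)
bmu = som.winner(x_test)
mask = np.array([lbl == bmu for lbl in labels])
if not mask.any():                    # empty unit: fall back to full data
    mask[:] = True
y_hat = grnn_predict(x_test, X_train[mask], y_train[mask])
print(f"pattern layer: {mask.sum()} of {len(X_train)} neurons, y_hat={y_hat:.3f}")
```

Restricting the pattern layer to the records of one (or a few) map units replaces the summation over all N training samples with a summation over roughly N divided by the number of occupied units, which is the reduction the abstract reports comes without performance loss.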

Bibliographic Details
Main Authors: Kartal Serkan, Oral Mustafa, Ozyildirim Buse Melis (Department of Computer Engineering, University of Cukurova, 01330 Balcali, Saricam/Adana, Turkey)
Format: Article
Language: English
Published: Sciendo, 2018-06-01
Series: International Journal of Applied Mathematics and Computer Science, Vol. 28, No. 2, pp. 411-424 (ISSN 2083-8492)
Subjects: generalized regression neural network; artificial neural network; self-organizing maps; nearest neighbour; reduced dataset
Online Access:https://doi.org/10.2478/amcs-2018-0031