A Modified Residual Extreme Learning Machine Algorithm and Its Application
Extreme learning machine (ELM) is a fast learning algorithm for single-hidden-layer feedforward neural networks. However, the stability of an ELM usually cannot be guaranteed, because its parameters, such as the hidden-layer biases and the weights connecting the input layer to the hidden layer, are generated randomly. Moreover, it is hard for a single model to achieve high prediction accuracy on a dataset containing low-quality data. In this paper, we first propose a modified residual ELM (R-ELM) to improve the ELM's learning performance. In R-ELM, the first ELM is trained on the original dataset, and the m-th (m > 1) ELM is trained on the residuals between the ground truths and the predictions of the previous ensemble model (built from m - 1 ELMs). The R-ELM (with m ELMs) is built in the direction of error reduction by computing the m-th ELM's optimal weight, which is determined by the R-ELM's loss function. As a result, R-ELM can memorize almost all the information in the training set; however, this ability does not guarantee comparable performance on the testing set. To address this problem, we add L₂ regularization to the loss function of the R-ELM, yielding the regularized R-ELM (RR-ELM), to avoid overfitting. In RR-ELM, the L₂ penalty encourages each ELM to ignore unnecessary information in the training set. To verify the effectiveness of the two proposed algorithms, experiments are performed on real data from a blast furnace. The experimental results show that RR-ELM and R-ELM are more stable than a single ELM, and that both proposed methods are more accurate than the averaged output of a group of ELMs, the single ELM, and support vector regression.
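To make the residual-stacking idea concrete, below is a minimal NumPy sketch, not the authors' implementation: a plain ELM draws random hidden-layer weights and biases and solves only for the output weights, and the m-th model is fitted to the residual left by the previous m - 1 models. The tanh activation, hidden size, and unit combination weight for each new ELM are illustrative assumptions; the paper instead derives an optimal weight for the m-th ELM from the R-ELM loss function. Setting `reg > 0` switches the output-weight solve to ridge regression, beta = (HᵀH + λI)⁻¹Hᵀy, which is the kind of L₂ penalty RR-ELM adds.

```python
import numpy as np

class ELM:
    """Single-hidden-layer ELM: random hidden parameters, least-squares output weights."""

    def __init__(self, n_hidden=50, reg=0.0, rng=None):
        self.n_hidden = n_hidden
        self.reg = reg  # L2 penalty strength; reg = 0 recovers the plain ELM
        self.rng = rng if rng is not None else np.random.default_rng()

    def _hidden(self, X):
        # Hidden-layer output H; W and b stay random and untrained, which is
        # the source of the run-to-run instability the abstract describes.
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        if self.reg > 0.0:
            # Ridge solve: beta = (H^T H + reg * I)^{-1} H^T y  (RR-ELM-style L2 penalty)
            A = H.T @ H + self.reg * np.eye(self.n_hidden)
            self.beta = np.linalg.solve(A, H.T @ y)
        else:
            # Moore-Penrose least-squares solution of H @ beta = y (plain ELM)
            self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta


def fit_residual_ensemble(X, y, n_models=10, reg=0.0, seed=0):
    """R-ELM-style stack: model m is trained on the residual left by models 1..m-1."""
    rng = np.random.default_rng(seed)
    models, residual = [], np.asarray(y, dtype=float).copy()
    for _ in range(n_models):
        elm = ELM(n_hidden=50, reg=reg, rng=rng).fit(X, residual)
        models.append(elm)
        residual = residual - elm.predict(X)  # unit weight; the paper optimizes this weight
    return models


def predict_ensemble(models, X):
    # The ensemble prediction is the sum of the stacked residual fits.
    return sum(m.predict(X) for m in models)


# Toy usage on synthetic data (a stand-in for the paper's blast-furnace data):
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]
models = fit_residual_ensemble(X, y, n_models=5, reg=1e-2)
print("training MSE:", np.mean((predict_ensemble(models, X) - y) ** 2))
```

With `reg = 0` the stack can drive the training residual toward zero, mirroring the abstract's remark that R-ELM memorizes almost all the information in the training set; a larger `reg` trades that memorization for test-set stability, which is the overfitting control RR-ELM targets.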
Main Authors: Sen Zhang, Zheng Liu, Xuejiao Huang, Wendong Xiao
Format: Article
Language: English
Published: IEEE, 2018-01-01
Series: IEEE Access
Subjects: ELM; residual learning; R-ELM; L₂ regularization; RR-ELM
Online Access: https://ieeexplore.ieee.org/document/8493516/
id: doaj-4d12c0e99aa14052bd279869ffac1cdd
Record Format: Article
Collection: DOAJ
DOI: 10.1109/ACCESS.2018.2876360
ISSN: 2169-3536
Citation: IEEE Access, vol. 6, pp. 62215–62223, 2018, article 8493516 (record updated 2021-03-29)
Author Affiliations: Sen Zhang (ORCID 0000-0002-8010-6045), Zheng Liu, Xuejiao Huang, and Wendong Xiao (ORCID 0000-0002-2270-7889) are all with the Key Laboratory of Knowledge Automation for Industrial Processes, Ministry of Education, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China.