An Efficient Algorithm for the Incremental Broad Learning System by Inverse Cholesky Factorization of a Partitioned Matrix
In this paper, we propose an efficient algorithm to accelerate the existing Broad Learning System (BLS) algorithm for newly added nodes. The existing BLS algorithm computes the output weights from the pseudoinverse with the ridge regression approximation and updates the pseudoinverse iteratively. As...
Main Authors: | Hufei Zhu, Zhulin Liu, C. L. Philip Chen, Yanyang Liang |
Format: | Article |
Language: | English |
Published: | IEEE, 2021-01-01 |
Series: | IEEE Access |
Subjects: | Broad learning system (BLS); incremental learning; added nodes; random vector functional-link neural networks (RVFLNN); single layer feedforward neural networks (SLFN); efficient algorithms |
Online Access: | https://ieeexplore.ieee.org/document/9326429/ |
id |
doaj-ddb72c27b20d4877895f359987d641bd |
record_format |
Article |
spelling |
doaj-ddb72c27b20d4877895f359987d641bd | 2021-03-30T15:17:27Z | eng | IEEE | IEEE Access | 2169-3536 | 2021-01-01 | Vol. 9, pp. 19294-19303 | 10.1109/ACCESS.2021.3052102 | 9326429
An Efficient Algorithm for the Incremental Broad Learning System by Inverse Cholesky Factorization of a Partitioned Matrix
Hufei Zhu (https://orcid.org/0000-0002-1629-6227), Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
Zhulin Liu (https://orcid.org/0000-0003-4145-823X), School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
C. L. Philip Chen, School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
Yanyang Liang, Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
In this paper, we propose an efficient algorithm to accelerate the existing Broad Learning System (BLS) algorithm for newly added nodes. The existing BLS algorithm computes the output weights from the pseudoinverse with the ridge regression approximation and updates the pseudoinverse iteratively. In contrast, the proposed BLS algorithm computes the output weights from the inverse Cholesky factor of the Hermitian matrix that appears in the calculation of the pseudoinverse, and updates that inverse Cholesky factor efficiently. Since the Hermitian matrix in the definition of the pseudoinverse is smaller than the pseudoinverse itself, the proposed BLS algorithm reduces the computational complexity, usually requiring less than $\frac {2}{3}$ of the complexity of the existing BLS algorithm. Our experiments on the Modified National Institute of Standards and Technology (MNIST) dataset show that the speedups of the proposed BLS over the existing BLS are 24.81%-37.99% in accumulative training time and 36.45%-58.96% in each additional training time, while the speedup in total training time is 37.99%.
In our experiments, the proposed BLS and the existing BLS achieve the same testing accuracy once the tiny differences (≤ 0.05%) caused by numerical errors are neglected; these tiny differences vanish and the numerical errors become negligible when the ridge parameter is not too small.
URL: https://ieeexplore.ieee.org/document/9326429/
Subjects: Broad learning system (BLS); incremental learning; added nodes; random vector functional-link neural networks (RVFLNN); single layer feedforward neural networks (SLFN); efficient algorithms |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Hufei Zhu Zhulin Liu C. L. Philip Chen Yanyang Liang |
spellingShingle |
Hufei Zhu Zhulin Liu C. L. Philip Chen Yanyang Liang An Efficient Algorithm for the Incremental Broad Learning System by Inverse Cholesky Factorization of a Partitioned Matrix IEEE Access Broad learning system (BLS) incremental learning added nodes random vector functional-link neural networks (RVFLNN) single layer feedforward neural networks (SLFN) efficient algorithms |
author_facet |
Hufei Zhu Zhulin Liu C. L. Philip Chen Yanyang Liang |
author_sort |
Hufei Zhu |
title |
An Efficient Algorithm for the Incremental Broad Learning System by Inverse Cholesky Factorization of a Partitioned Matrix |
title_short |
An Efficient Algorithm for the Incremental Broad Learning System by Inverse Cholesky Factorization of a Partitioned Matrix |
title_full |
An Efficient Algorithm for the Incremental Broad Learning System by Inverse Cholesky Factorization of a Partitioned Matrix |
title_fullStr |
An Efficient Algorithm for the Incremental Broad Learning System by Inverse Cholesky Factorization of a Partitioned Matrix |
title_full_unstemmed |
An Efficient Algorithm for the Incremental Broad Learning System by Inverse Cholesky Factorization of a Partitioned Matrix |
title_sort |
efficient algorithm for the incremental broad learning system by inverse cholesky factorization of a partitioned matrix |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2021-01-01 |
description |
In this paper, we propose an efficient algorithm to accelerate the existing Broad Learning System (BLS) algorithm for newly added nodes. The existing BLS algorithm computes the output weights from the pseudoinverse with the ridge regression approximation and updates the pseudoinverse iteratively. In contrast, the proposed BLS algorithm computes the output weights from the inverse Cholesky factor of the Hermitian matrix that appears in the calculation of the pseudoinverse, and updates that inverse Cholesky factor efficiently. Since the Hermitian matrix in the definition of the pseudoinverse is smaller than the pseudoinverse itself, the proposed BLS algorithm reduces the computational complexity, usually requiring less than $\frac {2}{3}$ of the complexity of the existing BLS algorithm. Our experiments on the Modified National Institute of Standards and Technology (MNIST) dataset show that the speedups of the proposed BLS over the existing BLS are 24.81%-37.99% in accumulative training time and 36.45%-58.96% in each additional training time, while the speedup in total training time is 37.99%. In our experiments, the proposed BLS and the existing BLS achieve the same testing accuracy once the tiny differences (≤ 0.05%) caused by numerical errors are neglected; these tiny differences vanish and the numerical errors become negligible when the ridge parameter is not too small. |
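The "partitioned matrix" of the title refers to the enlarged Hermitian matrix after new nodes append columns to the expanded-node matrix. A standard block identity lets the inverse Cholesky factor of the enlarged matrix be assembled from the old factor plus a small Schur-complement factor, rather than refactorized from scratch. The sketch below illustrates that identity under assumptions; it is not necessarily the paper's exact recursion, and all sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A1 = rng.standard_normal((200, 30))   # existing node outputs (illustrative)
A2 = rng.standard_normal((200, 10))   # newly added node outputs
lam = 1e-3                            # ridge parameter (illustrative)

H11 = A1.T @ A1 + lam * np.eye(30)
F1 = np.linalg.inv(np.linalg.cholesky(H11)).T   # old inverse Cholesky factor

# Blocks of the enlarged (partitioned) Hermitian matrix H = [[H11, B], [B^T, D]]
B = A1.T @ A2
D = A2.T @ A2 + lam * np.eye(10)

# Schur complement of H11 in H, and its own inverse Cholesky factor
S = D - B.T @ (F1 @ (F1.T @ B))
F2 = np.linalg.inv(np.linalg.cholesky(S)).T

# Block update: the enlarged upper-triangular inverse Cholesky factor is
# [[F1, -F1 F1^T B F2], [0, F2]], reusing F1 instead of refactorizing H.
top = np.hstack([F1, -F1 @ (F1.T @ B) @ F2])
bot = np.hstack([np.zeros((10, 30)), F2])
F = np.vstack([top, bot])

H = np.block([[H11, B], [B.T, D]])
assert np.allclose(F @ F.T, np.linalg.inv(H))   # F is the inverse factor of H
```

The dominant cost of the update is forming the Schur complement and factorizing it, which involves the small new-node block rather than the full enlarged matrix; this is the kind of saving an incremental scheme of this shape exploits.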
topic |
Broad learning system (BLS) incremental learning added nodes random vector functional-link neural networks (RVFLNN) single layer feedforward neural networks (SLFN) efficient algorithms |
url |
https://ieeexplore.ieee.org/document/9326429/ |
work_keys_str_mv |
AT hufeizhu anefficientalgorithmfortheincrementalbroadlearningsystembyinversecholeskyfactorizationofapartitionedmatrix AT zhulinliu anefficientalgorithmfortheincrementalbroadlearningsystembyinversecholeskyfactorizationofapartitionedmatrix AT clphilipchen anefficientalgorithmfortheincrementalbroadlearningsystembyinversecholeskyfactorizationofapartitionedmatrix AT yanyangliang anefficientalgorithmfortheincrementalbroadlearningsystembyinversecholeskyfactorizationofapartitionedmatrix AT hufeizhu efficientalgorithmfortheincrementalbroadlearningsystembyinversecholeskyfactorizationofapartitionedmatrix AT zhulinliu efficientalgorithmfortheincrementalbroadlearningsystembyinversecholeskyfactorizationofapartitionedmatrix AT clphilipchen efficientalgorithmfortheincrementalbroadlearningsystembyinversecholeskyfactorizationofapartitionedmatrix AT yanyangliang efficientalgorithmfortheincrementalbroadlearningsystembyinversecholeskyfactorizationofapartitionedmatrix |
_version_ |
1724179810974957568 |