A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning
Deep representations have attracted much attention owing to their strong performance on various tasks. However, the limited interpretability of deep representations poses a major challenge for real-world applications. To alleviate this challenge, this paper proposes a deep matrix factorization method with non-negative constraints that learns interpretable, part-based deep representations of big data. Specifically, a deep architecture is designed as an end-to-end framework for pattern mining, in which a supervisor network suppresses noise in the data and a student network learns interpretable deep representations. Furthermore, to train the deep matrix factorization architecture, an interpretability loss is defined, comprising a symmetric loss, an apposition loss, and a non-negative constraint loss; it ensures knowledge transfer from the supervisor network to the student network and enhances the robustness of the deep representations. Finally, extensive experiments on two benchmark datasets demonstrate the superiority of the method.
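As a rough illustration of the part-based factorization described above, the sketch below stacks non-negative factorizations layer by layer in the generic deep-NMF form X ≈ W1 W2 … WL HL. It is a minimal sketch using scikit-learn's standard NMF solver with hypothetical layer widths and toy data; it does not implement the paper's supervisor-student architecture or its interpretability loss.

```python
# Minimal layer-wise deep NMF sketch: X ~= W1 @ W2 @ H, all factors >= 0.
# Illustrates the generic multi-layer non-negative factorization only; the
# paper's supervisor/student networks and interpretability loss are not modeled.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((100, 50))        # toy non-negative data: 100 samples, 50 features
layer_widths = [32, 16]          # hypothetical widths of the two factor layers

H = X
basis_layers = []
for width in layer_widths:
    model = NMF(n_components=width, init="nndsvda", max_iter=500)
    W = model.fit_transform(H)   # H ~= W @ model.components_, with W >= 0
    basis_layers.append(W)
    H = model.components_        # re-factorize the coefficients at the next layer

# Reconstruct X through the stacked non-negative factors.
X_hat = np.linalg.multi_dot([*basis_layers, H])
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Because every basis matrix is entrywise non-negative, each sample is expressed as an additive combination of parts, which is the source of the interpretability the abstract emphasizes.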
Main Authors: | Zhikui Chen, Shan Jin, Runze Liu, Jianing Zhang |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2021-07-01 |
Series: | Frontiers in Neurorobotics |
ISSN: | 1662-5218 |
Subjects: | non-negative matrix factorization; deep representation learning; denoising autoencoder; interpretability; supervisor network |
Online Access: | https://www.frontiersin.org/articles/10.3389/fnbot.2021.701194/full |