Text Classification Based on Conditional Reflection
Text classification is an essential task in many natural language processing (NLP) applications. Each sentence may contain only a few words that play an important role in classification, while the remaining words have little effect on the result. Finding these keywords has...
Main Authors: | Yanliang Jin, Can Luo, Weisi Guo, Jinfei Xie, Dijia Wu, Rui Wang |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2019-01-01 |
Series: | IEEE Access |
Subjects: | Attention mechanism; bidirectional LSTM; convolutional neural networks; conditional reflection; text classification |
Online Access: | https://ieeexplore.ieee.org/document/8734068/ |
id |
doaj-02a7ee5a8ad441afaeb692cffccca742 |
---|---|
record_format |
Article |
spelling |
id: doaj-02a7ee5a8ad441afaeb692cffccca742
last updated: 2021-03-29T23:03:12Z
language: eng
publisher: IEEE
series: IEEE Access (ISSN 2169-3536)
published: 2019-01-01, vol. 7, pp. 76712-76719
doi: 10.1109/ACCESS.2019.2921976
article number: 8734068
title: Text Classification Based on Conditional Reflection
authors:
Yanliang Jin, https://orcid.org/0000-0001-9836-8249
Can Luo, https://orcid.org/0000-0002-1424-3930
Weisi Guo, https://orcid.org/0000-0003-3524-3953
Jinfei Xie, https://orcid.org/0000-0001-7283-8564
Dijia Wu, https://orcid.org/0000-0001-9708-9969
Rui Wang, https://orcid.org/0000-0002-7974-9510
affiliations: Yanliang Jin, Can Luo, Jinfei Xie, Dijia Wu, and Rui Wang: Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China. Weisi Guo: School of Engineering, University of Warwick, Coventry, U.K.
abstract: Text classification is an essential task in many natural language processing (NLP) applications. Each sentence may contain only a few words that play an important role in classification, while the remaining words have little effect on the result, so finding these keywords has an important impact on classification accuracy. In this paper, we propose RCNNA, a recurrent convolutional neural network with attention, which is modeled on human conditional reflexes for text classification. The model combines a bidirectional LSTM (BLSTM), an attention mechanism, and convolutional neural networks (CNNs) as the receptors, nerve center, and effectors of a reflex arc, respectively: the receptors capture context information through the BLSTM, the nerve center extracts the important information in the sentence through the attention mechanism, and the effectors capture further key information through the CNN. Finally, the model outputs the classification result through a softmax function. We test the algorithm on four Chinese and English text-classification datasets, comparing randomly initialized word vectors with pre-trained word vectors. The experiments show that RCNNA achieves the best performance compared with state-of-the-art baseline methods.
url: https://ieeexplore.ieee.org/document/8734068/
keywords: Attention mechanism; bidirectional LSTM; convolutional neural networks; conditional reflection; text classification |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Yanliang Jin, Can Luo, Weisi Guo, Jinfei Xie, Dijia Wu, Rui Wang |
spellingShingle |
Yanliang Jin
Can Luo
Weisi Guo
Jinfei Xie
Dijia Wu
Rui Wang
Text Classification Based on Conditional Reflection
IEEE Access
Attention mechanism
bidirectional LSTM
convolutional neural networks
conditional reflection
text classification |
author_facet |
Yanliang Jin, Can Luo, Weisi Guo, Jinfei Xie, Dijia Wu, Rui Wang |
author_sort |
Yanliang Jin |
title |
Text Classification Based on Conditional Reflection |
title_short |
Text Classification Based on Conditional Reflection |
title_full |
Text Classification Based on Conditional Reflection |
title_fullStr |
Text Classification Based on Conditional Reflection |
title_full_unstemmed |
Text Classification Based on Conditional Reflection |
title_sort |
text classification based on conditional reflection |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2019-01-01 |
description |
Text classification is an essential task in many natural language processing (NLP) applications. Each sentence may contain only a few words that play an important role in classification, while the remaining words have little effect on the result, so finding these keywords has an important impact on classification accuracy. In this paper, we propose RCNNA, a recurrent convolutional neural network with attention, which is modeled on human conditional reflexes for text classification. The model combines a bidirectional LSTM (BLSTM), an attention mechanism, and convolutional neural networks (CNNs) as the receptors, nerve center, and effectors of a reflex arc, respectively: the receptors capture context information through the BLSTM, the nerve center extracts the important information in the sentence through the attention mechanism, and the effectors capture further key information through the CNN. Finally, the model outputs the classification result through a softmax function. We test the algorithm on four Chinese and English text-classification datasets, comparing randomly initialized word vectors with pre-trained word vectors. The experiments show that RCNNA achieves the best performance compared with state-of-the-art baseline methods. |
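The abstract's "nerve center" is an attention mechanism that weights each token's BLSTM hidden state by its importance before pooling. The paper's exact formulation is not reproduced in this record, so the sketch below uses generic additive attention with illustrative names, random weights, and made-up dimensions; it shows the shape of such a mechanism, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, W, v):
    """Additive attention over a sequence of hidden states.

    H : (T, d) matrix of BLSTM hidden states, one row per token
    W : (d, d) projection matrix (learned in a real model)
    v : (d,)   scoring vector (learned in a real model)
    Returns per-token weights and the weighted sentence vector.
    """
    scores = np.tanh(H @ W) @ v   # (T,) unnormalized importance per token
    alpha = softmax(scores)       # keyword weights, non-negative, sum to 1
    context = alpha @ H           # (d,) sentence representation for the CNN stage
    return alpha, context

# Illustrative run: 6 tokens, hidden size 8, random parameters.
rng = np.random.default_rng(0)
T, d = 6, 8
H = rng.normal(size=(T, d))
W = rng.normal(size=(d, d)) * 0.1
v = rng.normal(size=d)
alpha, context = attention_pool(H, W, v)
```

In a trained model the weights `alpha` would concentrate on the few keywords the abstract mentions, and `context` would feed the convolutional "effector" layers.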
topic |
Attention mechanism; bidirectional LSTM; convolutional neural networks; conditional reflection; text classification |
url |
https://ieeexplore.ieee.org/document/8734068/ |
work_keys_str_mv |
AT yanliangjin textclassificationbasedonconditionalreflection AT canluo textclassificationbasedonconditionalreflection AT weisiguo textclassificationbasedonconditionalreflection AT jinfeixie textclassificationbasedonconditionalreflection AT dijiawu textclassificationbasedonconditionalreflection AT ruiwang textclassificationbasedonconditionalreflection |
_version_ |
1724190171612577792 |