An Adaptive Masked Attention Mechanism to Act on the Local Text in a Global Context for Aspect-based Sentiment Analysis


Bibliographic Details
Main Authors: Joe, I. (Author), Lin, T. (Author)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers Inc. 2023
Description
Summary: Aspect-based sentiment analysis (ABSA) is an important research area in natural language processing that aims to determine the sentiment polarity of the aspect terms in an input sentence. In recent years, many models have focused on the local text or on local text-aspect relations, acting directly on the local text and then fusing in features of the global text; in effect, this neglects the role of the global text. This paper first proposes a masked attention mechanism that, building on the global attention mechanism, acts on the local-embedding portion of the global embedding. Previous models use two methods, Context-features Dynamic Mask (CDM) and Context-features Dynamic Weighted (CDW), to assign weights to text vectors according to their distance from the aspect term; these methods avoid information redundancy. The proposed method uses the masked attention mechanism to intercept the local embedding within the global embedding, computes each position relative to the aspect term, reorders the corresponding weights, and assigns them back to the global embedding by index. In this way, the model not only reduces noise but also attends more fully to the feature information of the global text. Unlike previous approaches that embed the local and global text with two separate pre-trained models, the proposed model learns features of both with a single pre-trained model, which also improves training efficiency. The proposed model achieves good results on a total of eight datasets, including the three-class and four-class laptop and restaurant datasets from SemEval-2014, the restaurant dataset from SemEval-2016, and the Multi-Aspect Multi-Sentiment (MAMS) dataset.
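The CDM and CDW schemes cited in the summary can be sketched as follows. This is a minimal illustration assuming the token-to-aspect distance formulation used in earlier local-context-focus work; the threshold value, function names, and the linear down-weighting rule are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def srd(i, aspect_start, aspect_end):
    """Token-to-aspect distance: 0 for tokens inside the aspect span
    [aspect_start, aspect_end), otherwise the gap to the nearest edge.
    (Illustrative distance; the paper may define it differently.)"""
    if i < aspect_start:
        return aspect_start - i
    if i >= aspect_end:
        return i - aspect_end + 1
    return 0

def cdw_weights(seq_len, aspect_start, aspect_end, threshold=3):
    """CDW-style weights: tokens within the distance threshold keep
    weight 1.0; farther tokens are linearly down-weighted by their
    excess distance (assumed formulation)."""
    w = np.ones(seq_len)
    for i in range(seq_len):
        d = srd(i, aspect_start, aspect_end)
        if d > threshold:
            w[i] = 1.0 - (d - threshold) / seq_len
    return w

def cdm_mask(seq_len, aspect_start, aspect_end, threshold=3):
    """CDM-style mask: a hard 0/1 mask over the same distances."""
    return np.array([1.0 if srd(i, aspect_start, aspect_end) <= threshold
                     else 0.0 for i in range(seq_len)])

# Example: a 10-token sentence with the aspect term at positions 4-5.
weights = cdw_weights(10, 4, 6)
mask = cdm_mask(10, 4, 6)
```

Either output would then be applied elementwise to the token embeddings: CDM zeroes out distant context entirely, while CDW keeps it with reduced weight, which is the redundancy-reduction behavior the summary attributes to these methods.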
Physical Description: 1
ISSN: 2169-3536
DOI:10.1109/ACCESS.2023.3270927