An Adaptive Masked Attention Mechanism to Act on the Local Text in a Global Context for Aspect-based Sentiment Analysis


Bibliographic Details
Main Authors: Joe, I. (Author), Lin, T. (Author)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers Inc. 2023
Online Access: View Fulltext in Publisher
View in Scopus
LEADER 03038nam a2200301Ia 4500
001 10.1109-ACCESS.2023.3270927
008 230529s2023 CNT 000 0 und d
020 |a 2169-3536 (ISSN) 
245 1 0 |a An Adaptive Masked Attention Mechanism to Act on the Local Text in a Global Context for Aspect-based Sentiment Analysis 
260 0 |b Institute of Electrical and Electronics Engineers Inc.  |c 2023 
300 |a 1 
856 |z View Fulltext in Publisher  |u https://doi.org/10.1109/ACCESS.2023.3270927 
856 |z View in Scopus  |u https://www.scopus.com/inward/record.uri?eid=2-s2.0-85159692545&doi=10.1109%2fACCESS.2023.3270927&partnerID=40&md5=f97300cdd3004793b6bb482bf22afa2c 
520 3 |a Aspect-based sentiment analysis (ABSA) is an important research area in natural language processing that aims to determine the sentiment polarity of the aspect terms in an input sentence. In recent years, many models have focused on the local text, or on local text-aspect relations, by acting directly on the local text and then fusing in features of the global text; in doing so, they neglect the role of the global text. This paper first proposes a masked attention mechanism, built on the global attention mechanism, that acts on the local embedding part of the global embedding. Previous models use two methods, Context-features Dynamic Mask (CDM) and Context-features Dynamic Weighted (CDW), to weight text vectors by their distance to the aspect term and thereby avoid information redundancy. The proposed method instead uses the masked attention mechanism to intercept the local embedding within the global embedding, computes positions along the aspect-term dimension, reorders the corresponding weights, and assigns them back to the global embedding by their subscripts. In this way, the proposed model not only reduces noise but also attends more closely to the feature information of the global text. Whereas previous work embeds local and global text with two separate pre-trained models, the model proposed in this paper learns features of both global and local text with a single pre-trained model, which also improves training efficiency. The proposed model achieves good results on a total of eight datasets, including the three-class and four-class laptop and restaurant datasets from SemEval-2014, the restaurant dataset from SemEval-2016, and the Multi-Aspect Multi-Sentiment (MAMS) dataset. Author 
650 0 4 |a aspect-based sentiment analysis 
650 0 4 |a attention-based context-featured dynamic mask 
650 0 4 |a Context modeling 
650 0 4 |a Deep learning 
650 0 4 |a global context focus 
650 0 4 |a masked attention 
650 0 4 |a Neural networks 
650 0 4 |a Semantics 
650 0 4 |a Sentiment analysis 
650 0 4 |a Syntactics 
650 0 4 |a Task analysis 
700 1 0 |a Joe, I.  |e author 
700 1 0 |a Lin, T.  |e author 
773 |t IEEE Access