Transformer Based Language Identification for Malayalam-English Code-Mixed Text
Social media users tend to produce the majority of data for under-resourced languages in code-mixed form. Code-mixing is defined as the mixing of two or more languages within a single sentence. Research on code-mixed text helps apprehend security threats prevalent on social media platforms....
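As a rough illustration of the transformer-based approach the title names (not the authors' actual pipeline, which the truncated abstract does not detail), the sketch below runs a romanized Malayalam-English sentence through a multilingual BERT encoder with Hugging Face Transformers. The model name, the example sentence, and the label set mentioned in the comments are all assumptions made for illustration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed encoder choice: any multilingual BERT-family checkpoint would do here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

# Invented example: romanized Malayalam mixed with English.
sentence = "enikku oru coffee venam please"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state holds one contextual embedding per subword token.
# For language identification, a token-level classifier head trained on
# language tags (e.g. mal / eng / other) would be applied on top of these
# embeddings to emit one language label per token.
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 12, 768])
```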
Main Authors: S. Thara, Prabaharan Poornachandran
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9511454/
Similar Items
- Study on Predicting Psychological Traits of Online Text by BERT
  by: ZHANG Han, JIA Tianyuan, LUO Fang, ZHANG Sheng, WU Xia
  Published: (2021-08-01)
- Zero‐anaphora resolution in Korean based on deep language representation model: BERT
  by: Youngtae Kim, et al.
  Published: (2020-10-01)
- Automatic detection of actionable radiology reports using bidirectional encoder representations from transformers
  by: Yuta Nakamura, et al.
  Published: (2021-09-01)
- Deep Entity Linking via Eliminating Semantic Ambiguity With BERT
  by: Xiaoyao Yin, et al.
  Published: (2019-01-01)
- Improving BERT-Based Text Classification With Auxiliary Sentence and Domain Knowledge
  by: Shanshan Yu, et al.
  Published: (2019-01-01)