Sentiment Analysis Using Pre-Trained Language Model With No Fine-Tuning and Less Resource

Bibliographic Details
Main Authors: Yuheng Kit, Musa Mohd Mokji
Format: Article
Language: English
Published: IEEE 2022-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9912410/
Description
Summary: Sentiment analysis became popular once Natural Language Processing algorithms proved able to process complex sentences with good accuracy. Recently, pre-trained language models such as BERT and mBERT have been shown to be effective at improving language tasks. Most work implementing these models focuses on fine-tuning BERT to achieve the desired results. However, this approach is resource-intensive and requires long training times, up to a few hours on a GPU depending on the dataset. Hence, this paper proposes a less complex system that uses the BERT model without the fine-tuning process and adopts a feature reduction algorithm to reduce the sentence embeddings, cutting training time. The experimental results show that with the sentence embeddings reduced by 50%, the proposed system improves accuracy by 1-2% with 71% less training time and 89% less memory usage. The proposed approach is also shown to work for multilingual tasks using a single mBERT model.
ISSN:2169-3536
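The abstract describes the pipeline only at a high level. A minimal sketch of the idea, assuming mean-pooled embeddings from a frozen BERT encoder, ANOVA-based feature selection as a stand-in for the paper's unnamed feature reduction algorithm, and logistic regression as the lightweight classifier, could look like this:

```python
# Hypothetical sketch of the no-fine-tuning pipeline described in the abstract.
# Assumptions (not confirmed by the source): mean pooling over token states,
# SelectKBest/ANOVA as the feature reduction step, logistic regression as the
# classifier.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "bert-base-uncased"  # "bert-base-multilingual-cased" for the mBERT variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()  # frozen weights: no fine-tuning, no gradient updates

@torch.no_grad()
def embed(sentences):
    """Mean-pooled sentence embeddings from the frozen encoder."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state            # (batch, tokens, 768)
    mask = batch["attention_mask"].unsqueeze(-1)           # (batch, tokens, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()  # (batch, 768)

# Toy data; the paper evaluates on real sentiment corpora.
texts = ["great movie, loved it", "terrible plot, waste of time",
         "an absolute delight", "utterly boring"]
labels = [1, 0, 1, 0]

X = embed(texts)

# Feature reduction: keep half of the 768 embedding dimensions, mirroring the
# "50% fewer sentence embeddings" figure from the abstract.
selector = SelectKBest(f_classif, k=X.shape[1] // 2)
X_reduced = selector.fit_transform(X, labels)

# A lightweight classifier trains in seconds on CPU, versus hours of GPU time
# for full fine-tuning.
clf = LogisticRegression(max_iter=1000).fit(X_reduced, labels)
print(clf.predict(selector.transform(embed(["a joy to watch"]))))
```

Because the encoder stays frozen, the embeddings can be computed once and cached; only the small classifier is retrained, which is plausibly where the reported savings in training time and memory come from.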