A Bi-Directional GRU Architecture for the Self-Attention Mechanism: An Adaptable, Multi-Layered Approach with Blend of Word Embedding

Bibliographic Details
Main Authors: Amit Pimpalkar, Jeberson Retna Raj
Format: Article
Language: English
Published: Taiwan Association of Engineering and Technology Innovation 2023-07-01
Series:International Journal of Engineering and Technology Innovation
Subjects: Bi-directional GRU; attention mechanism; deep learning; natural language processing; word embedding
Online Access:https://ojs.imeti.org/index.php/IJETI/article/view/11510
ISSN: 2223-5329, 2226-809X
DOI: 10.46604/ijeti.2023.11510
Volume/Issue: Vol. 13, No. 3
Collection: Directory of Open Access Journals (DOAJ), record doaj.art-10656f64830e4585a2b3291c256137e1

Author affiliations:
Amit Pimpalkar: School of Computing, Sathyabama Institute of Science and Technology, Chennai, India; Department of Computer Science and Engineering, Jhulelal Institute of Technology, Nagpur, India
Jeberson Retna Raj: School of Computing, Sathyabama Institute of Science and Technology, Chennai, India

Abstract:
Sentiment analysis (SA) has become an essential component of natural language processing (NLP), with numerous practical applications in understanding “what other people think”. Various techniques have been developed to tackle SA using deep learning (DL); however, current research lacks comprehensive strategies incorporating multiple word embeddings. This study proposes a self-attention mechanism that leverages DL and involves the contextual integration of word embeddings with a time-dispersed bidirectional gated recurrent unit (Bi-GRU). The work employs the word embedding approaches GloVe, word2vec, and fastText to achieve better predictive capability. By integrating these techniques, the study aims to improve the classifier’s ability to precisely analyze and categorize sentiments in textual data from the movie domain. The investigation seeks to enhance the classifier’s performance in NLP tasks by addressing the challenges of underfitting and overfitting in DL. To evaluate the model’s effectiveness, the openly available IMDb dataset was utilized, achieving a testing accuracy of 99.70%.
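The pipeline the abstract describes (a blend of GloVe, word2vec, and fastText embeddings fed through a bidirectional GRU, pooled by self-attention, and classified with a sigmoid head) can be sketched in plain NumPy. This is an illustrative toy forward pass, not the authors' implementation: the layer sizes, the concatenation-based embedding blend, the additive form of the attention, and the random stand-in embedding tables are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_gru_params(d_in, d_h):
    # Six weight matrices: (W, U) pairs for the update, reset, and candidate gates.
    shapes = [(d_in, d_h), (d_h, d_h)] * 3
    return [rng.normal(0.0, 0.1, s) for s in shapes]

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    # Standard GRU cell: update gate z, reset gate r, candidate state h_tilde.
    z = sigmoid(x @ Wz + h @ Uz)
    r = sigmoid(x @ Wr + h @ Ur)
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1.0 - z) * h + z * h_tilde

def run_gru(x_seq, d_h, params):
    # Run a GRU over a sequence and return the state at every time step.
    h = np.zeros(d_h)
    states = []
    for x in x_seq:
        h = gru_step(x, h, *params)
        states.append(h)
    return np.stack(states)                      # (T, d_h)

# Blended word embeddings: look each token up in three tables and concatenate.
vocab, d_emb = 50, 8                             # toy sizes, not the paper's
glove_tbl = rng.normal(size=(vocab, d_emb))      # random stand-ins for pretrained
w2v_tbl = rng.normal(size=(vocab, d_emb))        # GloVe / word2vec / fastText
fasttext_tbl = rng.normal(size=(vocab, d_emb))   # vectors
blend_tbl = np.concatenate([glove_tbl, w2v_tbl, fasttext_tbl], axis=1)

tokens = np.array([3, 17, 42, 5])                # a toy 4-token review
x_seq = blend_tbl[tokens]                        # (T, 3 * d_emb)

# Bidirectional GRU: run forward and reversed, concatenate per time step.
d_h = 6
fwd = run_gru(x_seq, d_h, make_gru_params(3 * d_emb, d_h))
bwd = run_gru(x_seq[::-1], d_h, make_gru_params(3 * d_emb, d_h))[::-1]
H = np.concatenate([fwd, bwd], axis=1)           # (T, 2 * d_h)

# Additive self-attention pooling over the Bi-GRU states.
Wa = rng.normal(0.0, 0.1, (2 * d_h, 2 * d_h))
va = rng.normal(0.0, 0.1, 2 * d_h)
scores = np.tanh(H @ Wa) @ va                    # (T,)
alpha = np.exp(scores) / np.exp(scores).sum()    # attention weights, sum to 1
context = alpha @ H                              # (2 * d_h,) weighted summary

# Sigmoid head for binary (positive/negative) sentiment.
w_out = rng.normal(0.0, 0.1, 2 * d_h)
p_positive = sigmoid(context @ w_out)
print(H.shape, float(p_positive))
```

In practice the three tables would be loaded from pretrained vectors and the whole stack trained end-to-end on the IMDb reviews; the sketch only shows the forward pass and the tensor shapes involved.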