An enhanced gated recurrent unit with auto-encoder for solving text classification problems
Main Author: Muhammad Zulqarnain
Format: Thesis
Language: English
Published: 2020
Online Access:
http://eprints.uthm.edu.my/4928/1/24p%20MUHAMMAD%20ZULQARNAIN.pdf
http://eprints.uthm.edu.my/4928/2/MUHAMMAD%20ZULQARNAIN%20COPYRIGHT%20DECLARATION.pdf
http://eprints.uthm.edu.my/4928/3/MUHAMMAD%20ZULQARNAIN%20WATERMARK.pdf
Summary:

Classification has become an important task for automatically categorizing documents into their respective groups. The Gated Recurrent Unit (GRU) is a type of Recurrent Neural Network (RNN), a deep learning architecture whose cell contains an update gate and a reset gate. It is considered one of the most efficient text classification techniques, particularly on sequential datasets.
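For reference, since the record does not state them, the standard GRU equations (with sigma the logistic sigmoid and the circled dot denoting element-wise multiplication) are:

```latex
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) && \text{(update gate)} \\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) && \text{(reset gate)} \\
\tilde{h}_t &= \tanh\left(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\right) && \text{(candidate state)} \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(hidden state)}
\end{aligned}
```

The redundancy discussed below is visible here: z_t and r_t are two sigmoid gates computed from exactly the same inputs x_t and h_{t-1}.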
from three major issues when it is applied for solving the text classification
problems. The first drawback is the failure in data dimensionality reduction, which
leads to low quality solution for the classification problems. Secondly, GRU still has
difficulty in training procedure due to redundancy between update and reset gates.
The reset gate creates complexity and require high processing time. Thirdly, GRU
also has a problem with informative features loss in each recurrence during the
training phase and high computational cost. The reason behind this failure is due to a
random selection of features from datasets (or previous outputs), when applied in its
Therefore, this research proposes a new model, the Encoder Simplified GRU (ES-GRU), which reduces the dimensionality of the data with an Auto-Encoder (AE). Accordingly, the reset gate is replaced by the update gate in order to remove the redundancy and complexity of the standard GRU. Finally, a Batch Normalization method is incorporated into both the GRU and the AE to improve the performance of the proposed ES-GRU model.
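The record contains no code, so the following is only a minimal sketch of what such a simplified cell could look like: a single-gate GRU in which the update gate also takes over the reset gate's role, with a plain batch normalization applied to the pre-activations. All names (SimplifiedGRUCell, batch_norm) and design details here are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def batch_norm(x, eps=1e-5):
    # Plain batch normalization over the batch axis (no learned
    # scale/shift), standing in for the BN step the thesis adds.
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

class SimplifiedGRUCell:
    """Single-gate GRU: the reset gate is removed and the update
    gate z also modulates the candidate state -- one plausible
    reading of the ES-GRU simplification."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_dim)
        self.Wz = rng.uniform(-s, s, (input_dim, hidden_dim))
        self.Uz = rng.uniform(-s, s, (hidden_dim, hidden_dim))
        self.bz = np.zeros(hidden_dim)
        self.Wh = rng.uniform(-s, s, (input_dim, hidden_dim))
        self.Uh = rng.uniform(-s, s, (hidden_dim, hidden_dim))
        self.bh = np.zeros(hidden_dim)

    def step(self, x, h_prev):
        # Update gate; it doubles as the removed reset gate.
        z = sigmoid(batch_norm(x @ self.Wz + h_prev @ self.Uz + self.bz))
        # Candidate state, computed from the gated previous state.
        h_tilde = np.tanh(batch_norm(x @ self.Wh + (z * h_prev) @ self.Uh + self.bh))
        # Convex combination of previous and candidate states.
        return (1.0 - z) * h_prev + z * h_tilde

# Usage: a batch of 4 sequences whose steps are 32-dimensional
# vectors, e.g. outputs of the auto-encoder's bottleneck layer.
cell = SimplifiedGRUCell(input_dim=32, hidden_dim=64)
h = np.zeros((4, 64))
for t in range(10):
    x_t = np.random.default_rng(t).normal(size=(4, 32))
    h = cell.step(x_t, h)
print(h.shape)  # (4, 64)
```

With only one gate, the cell computes a single sigmoid per step instead of two, which is where the claimed reduction in redundancy and processing time would come from.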
The proposed model was evaluated on seven benchmark text datasets and compared with six well-known baseline multiclass text classification approaches: the standard GRU, AE, Long Short-Term Memory, Convolutional Neural Network, Support Vector Machine, and Naïve Bayes. Across various performance evaluation measures, the proposed model showed a considerable improvement over the other standard classification techniques, demonstrating the effectiveness and efficiency of the developed model.