Training deep network models for accurate recognition of texts in scenes


Bibliographic Details
Main Author: Sui, Lulu
Other Authors: Lu Shijian
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2023
Online Access: https://hdl.handle.net/10356/166170
Description
Summary: Scene text recognition has long been a popular research area spanning computer vision and natural language processing, owing to its broad application spectrum, which includes automatic document scanning and license plate recognition. Recently, deep learning-based approaches have attracted significant attention for their impressive results on benchmark datasets such as IIIT5K, ICDAR 2013 and ICDAR 2015, achieved through techniques such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and attention mechanisms. However, many challenges remain, including the high variability of scene text (orientation, lighting, low resolution) and the trade-off between accuracy and speed. Moreover, state-of-the-art (SOTA) models usually demand substantial computational resources and hardware to deploy efficiently. This research therefore aims to address these challenges by reproducing and enhancing existing scene text recognition models through fine-tuning of current designs.

To that end, the project pursues two key goals. The first is to reproduce the ABINet and SAR models: ABINet encodes linguistic knowledge into the model to achieve SOTA performance, while SAR is comparatively lightweight and easy to train and deploy, with good performance relative to many other models. Reproducing them builds a solid foundation for further enhancements. The second is to fine-tune the models through hyperparameter tuning, systematically testing various learning rates to find the combination that yields the best performance, fastest training, and least overfitting. Six benchmark test datasets were used to evaluate the models' performance and the outcome of the hyperparameter tuning; after extensive tuning experiments, the optimal learning rate for the SAR model was set at 0.0001.
Under these settings, ABINet achieves 80.1% accuracy and SAR achieves 84.6% accuracy on the benchmark test sets.
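The learning-rate tuning procedure described above can be sketched as a simple sweep: train the recognizer once per candidate rate, measure validation accuracy, and keep the best. The sketch below is a minimal illustration, not the project's actual training pipeline; `train_and_evaluate` is a hypothetical stand-in that here returns fixed dummy scores in place of a real training run.

```python
# Minimal sketch of a learning-rate sweep (hypothetical stand-in code).
# In the real experiments, train_and_evaluate would train SAR or ABINet
# with the given learning rate and return validation accuracy on the
# benchmark datasets; the dummy scores below are for illustration only.

def train_and_evaluate(lr):
    """Placeholder for a full training + validation run at learning rate lr."""
    dummy_scores = {1e-3: 0.780, 1e-4: 0.846, 1e-5: 0.810}  # illustrative only
    return dummy_scores[lr]

def sweep(candidate_lrs):
    """Train once per candidate rate and return the best rate with all results."""
    results = {lr: train_and_evaluate(lr) for lr in candidate_lrs}
    best_lr = max(results, key=results.get)  # rate with highest validation accuracy
    return best_lr, results

best_lr, results = sweep([1e-3, 1e-4, 1e-5])
print(best_lr)  # with these dummy scores: 0.0001
```

With the illustrative scores above, the sweep selects 0.0001, mirroring the rate the project settled on for SAR; in practice each call to `train_and_evaluate` would be a full training run, so the candidate list is kept short.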