Lexically Aware Semi-Supervised Learning for OCR Post-Correction
Abstract: Much of the existing linguistic data in many languages of the world is locked away in non-digitized books and documents. Optical character recognition (OCR) can be used to produce digitized text, and previous work has demonstrated the utility of neural post-correction method...
Main Authors: | Shruti Rijhwani, Daisy Rosenblum, Antonios Anastasopoulos, Graham Neubig |
---|---|
Format: | Article |
Language: | English |
Published: | The MIT Press, 2021-01-01 |
Series: | Transactions of the Association for Computational Linguistics |
Online Access: | https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00427/108475/Lexically-Aware-Semi-Supervised-Learning-for-OCR |
_version_ | 1818539283686883328 |
---|---|
author | Shruti Rijhwani Daisy Rosenblum Antonios Anastasopoulos Graham Neubig |
author_facet | Shruti Rijhwani Daisy Rosenblum Antonios Anastasopoulos Graham Neubig |
author_sort | Shruti Rijhwani |
collection | DOAJ |
description |
Abstract: Much of the existing linguistic data in many languages of the world is locked away in non-digitized books and documents. Optical character recognition (OCR) can be used to produce digitized text, and previous work has demonstrated the utility of neural post-correction methods that improve the results of general-purpose OCR systems on recognition of less-well-resourced languages. However, these methods rely on manually curated post-correction data, which are relatively scarce compared to the non-annotated raw images that need to be digitized. In this paper, we present a semi-supervised learning method that makes it possible to utilize these raw images to improve performance, specifically through the use of self-training, a technique where a model is iteratively trained on its own outputs. In addition, to enforce consistency in the recognized vocabulary, we introduce a lexically aware decoding method that augments the neural post-correction model with a count-based language model constructed from the recognized texts, implemented using weighted finite-state automata (WFSA) for efficient and effective decoding. Results on four endangered languages demonstrate the utility of the proposed method, with relative error reductions of 15%–29%, where we find the combination of self-training and lexically aware decoding essential for achieving consistent improvements. |
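To illustrate the core idea behind lexically aware decoding, here is a minimal sketch: a count-based lexicon built from recognized texts is used to rerank candidate post-corrections alongside a neural score. This is only an assumption-laden toy (the names `build_lexicon`, `lexical_score`, and `rerank`, the smoothing constant, and the mock neural log-probabilities are all hypothetical); the paper's actual system compiles such counts into a weighted finite-state automaton and integrates it during decoding, not as a post-hoc reranker.

```python
import math
from collections import Counter

def build_lexicon(texts):
    """Count word frequencies over the recognized texts (a stand-in for
    the paper's count-based language model)."""
    counts = Counter()
    for line in texts:
        counts.update(line.split())
    return counts

def lexical_score(hypothesis, lexicon, alpha=1.0):
    """Log-count bonus for words seen in the lexicon; unseen words get a
    small smoothed count so they are penalized rather than forbidden."""
    return sum(alpha * math.log(lexicon.get(w, 0) + 0.5)
               for w in hypothesis.split())

def rerank(candidates, lexicon):
    """Combine each candidate's (mock) neural log-probability with the
    lexical bonus and return the best-scoring correction."""
    return max(candidates, key=lambda c: c[1] + lexical_score(c[0], lexicon))

recognized = ["the cat sat", "the dog sat", "the cat ran"]
lexicon = build_lexicon(recognized)

# Two candidate corrections with similar neural scores; the lexicon
# breaks the tie in favor of vocabulary attested in the recognized texts.
candidates = [("the cot sat", -1.0), ("the cat sat", -1.1)]
best = rerank(candidates, lexicon)
```

The same vocabulary-consistency pressure is what the WFSA provides in the paper, but applied efficiently inside beam decoding rather than over a finished candidate list.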
first_indexed | 2024-12-11T21:40:00Z |
format | Article |
id | doaj.art-36c3cbb49dc746b0a2b9638c326f28c0 |
institution | Directory Open Access Journal |
issn | 2307-387X |
language | English |
last_indexed | 2024-12-11T21:40:00Z |
publishDate | 2021-01-01 |
publisher | The MIT Press |
record_format | Article |
series | Transactions of the Association for Computational Linguistics |
spelling | doaj.art-36c3cbb49dc746b0a2b9638c326f28c02022-12-22T00:49:52ZengThe MIT PressTransactions of the Association for Computational Linguistics2307-387X2021-01-0191285130210.1162/tacl_a_00427Lexically Aware Semi-Supervised Learning for OCR Post-CorrectionShruti Rijhwani0Daisy Rosenblum1Antonios Anastasopoulos2Graham Neubig3Language Technologies Institute, Carnegie Mellon University, USA. srijhwan@cs.cmu.eduUniversity of British Columbia, Canada. daisy.rosenblum@ubc.caDepartment of Computer Science, George Mason University, USA. antonis@gmu.eduLanguage Technologies Institute, Carnegie Mellon University, USA. gneubig@cs.cmu.edu Abstract: Much of the existing linguistic data in many languages of the world is locked away in non-digitized books and documents. Optical character recognition (OCR) can be used to produce digitized text, and previous work has demonstrated the utility of neural post-correction methods that improve the results of general-purpose OCR systems on recognition of less-well-resourced languages. However, these methods rely on manually curated post-correction data, which are relatively scarce compared to the non-annotated raw images that need to be digitized. In this paper, we present a semi-supervised learning method that makes it possible to utilize these raw images to improve performance, specifically through the use of self-training, a technique where a model is iteratively trained on its own outputs. In addition, to enforce consistency in the recognized vocabulary, we introduce a lexically aware decoding method that augments the neural post-correction model with a count-based language model constructed from the recognized texts, implemented using weighted finite-state automata (WFSA) for efficient and effective decoding. Results on four endangered languages demonstrate the utility of the proposed method, with relative error reductions of 15%–29%, where we find the combination of self-training and lexically aware decoding essential for achieving consistent improvements. https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00427/108475/Lexically-Aware-Semi-Supervised-Learning-for-OCR |
spellingShingle | Shruti Rijhwani Daisy Rosenblum Antonios Anastasopoulos Graham Neubig Lexically Aware Semi-Supervised Learning for OCR Post-Correction Transactions of the Association for Computational Linguistics |
title | Lexically Aware Semi-Supervised Learning for OCR Post-Correction |
title_full | Lexically Aware Semi-Supervised Learning for OCR Post-Correction |
title_fullStr | Lexically Aware Semi-Supervised Learning for OCR Post-Correction |
title_full_unstemmed | Lexically Aware Semi-Supervised Learning for OCR Post-Correction |
title_short | Lexically Aware Semi-Supervised Learning for OCR Post-Correction |
title_sort | lexically aware semi supervised learning for ocr post correction |
url | https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00427/108475/Lexically-Aware-Semi-Supervised-Learning-for-OCR |
work_keys_str_mv | AT shrutirijhwani lexicallyawaresemisupervisedlearningforocrpostcorrection AT daisyrosenblum lexicallyawaresemisupervisedlearningforocrpostcorrection AT antoniosanastasopoulos lexicallyawaresemisupervisedlearningforocrpostcorrection AT grahamneubig lexicallyawaresemisupervisedlearningforocrpostcorrection |