Summary: | When a digital collection has been processed by OCR, the usability expectations
of patrons and researchers are high. While the former expect full-text search
to return all instances of terms in historical collections correctly, the latter
are more familiar with the impact of OCR errors but would still like to
apply big data analysis or machine learning methods. All of these use cases
depend on high-quality textual transcriptions of the scans. This is why the
National Library of Luxembourg (BnL) has developed a pipeline to improve
OCR for existing digitised documents. Enhancing OCR in a digital library not
only demands improved machine learning models but also requires a coherent
reprocessing strategy to apply them efficiently in production systems. The newly
developed software tool, Nautilus, fulfils these requirements using
METS/ALTO as a pivot format. The BnL has open-sourced it so that other
libraries can re-use it on their own collections. This paper covers the creation
of the ground truth, the details of the reprocessing pipeline, its production use
on the entire BnL collection, and the estimated results. Based on a quality
prediction measure developed during the project, approximately 28 million
additional text lines now exceed the quality threshold.