Benchmarking for biomedical natural language processing tasks with a domain specific ALBERT

Abstract

Background: The abundance of biomedical text data, coupled with advances in natural language processing (NLP), is enabling novel biomedical NLP (BioNLP) applications. These NLP applications, or tasks, rely on the availability of domain-specific language models (LMs) trained on massive amounts of data. Most existing domain-specific LMs adopt the bidirectional encoder representations from transformers (BERT) architecture, which has limitations, and their generalizability is unproven because baseline results across common BioNLP tasks are absent.

Results: We present 8 variants of BioALBERT, a domain-specific adaptation of A Lite BERT (ALBERT), trained on biomedical (PubMed and PubMed Central) and clinical (MIMIC-III) corpora and fine-tuned for 6 different tasks across 20 benchmark datasets. Experiments show that a large variant of BioALBERT trained on PubMed outperforms the state of the art on named-entity recognition (+11.09% BLURB score), relation extraction (+0.80% BLURB score), sentence similarity (+1.05% BLURB score), document classification (+0.62% F1-score), and question answering (+2.83% BLURB score), establishing a new state of the art in 5 of the 6 benchmark BioNLP tasks.

Conclusions: The large variant of BioALBERT trained on PubMed achieved a higher BLURB score than previous state-of-the-art models on 5 of the 6 benchmark BioNLP tasks. Depending on the task, 5 different variants of BioALBERT outperformed previous state-of-the-art models on 17 of the 20 benchmark datasets, showing that our model is robust and generalizable across common BioNLP tasks. We have made BioALBERT freely available, which will help the BioNLP community avoid the computational cost of training and will establish a new set of baselines for future efforts across a broad range of BioNLP tasks.


Bibliographic Details
Main Authors: Usman Naseem, Adam G. Dunn, Matloob Khushi, Jinman Kim
Format: Article
Language: English
Published: BMC, 2022-04-01
Series: BMC Bioinformatics
ISSN: 1471-2105
Subjects: Bioinformatics; Biomedical text mining; BioNLP; Domain-specific language model
Online Access: https://doi.org/10.1186/s12859-022-04688-w
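
The abstract notes that BioALBERT is released freely for the community. As an illustration only, the sketch below shows how an ALBERT-style biomedical checkpoint could be loaded with the Hugging Face transformers library and set up for named-entity recognition, one of the six benchmark tasks. The model identifier and label set here are hypothetical placeholders, not the authors' published artifacts; consult the paper's availability statement for the actual weights.

```python
# Minimal sketch: load an ALBERT-style biomedical checkpoint and prepare it
# for token-level NER fine-tuning. MODEL_ID is a HYPOTHETICAL placeholder;
# substitute the BioALBERT weights released by the authors.

from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_ID = "path/to/bioalbert-large-pubmed"  # hypothetical placeholder

# Example BIO tagging scheme for a single entity type (e.g., diseases).
labels = ["O", "B-Disease", "I-Disease"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_ID,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# Tokenize one sentence; during fine-tuning, subword pieces are aligned to
# word-level labels (pieces beyond a word's first are typically masked out).
inputs = tokenizer(
    "Mutations in BRCA1 are linked to breast cancer.",
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, num_labels)
```

The same loading pattern would apply to the other task types by swapping the head class, e.g., a sequence-classification head for document classification; the choice of head, not the encoder, is what changes per benchmark.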