A pre-trained BERT for Korean medical natural language processing
Abstract: With advances in deep learning and natural language processing (NLP), the analysis of medical texts is becoming increasingly important. Nonetheless, no research on Korean medical domain-specific language models has been conducted. The Korean medi...
Main Authors: Yoojoong Kim, Jong-Ho Kim, Jeong Moon Lee, Moon Joung Jang, Yun Jin Yum, Seongtae Kim, Unsub Shin, Young-Min Kim, Hyung Joon Joo, Sanghoun Song
Format: Article
Language: English
Published: Nature Portfolio, 2022-08-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-022-17806-8
Similar Items
- Author Correction: A pre-trained BERT for Korean medical natural language processing
  by: Yoojoong Kim, et al.
  Published: (2023-06-01)
- A Word Pair Dataset for Semantic Similarity and Relatedness in Korean Medical Vocabulary: Reference Development and Validation
  by: Yunjin Yum, et al.
  Published: (2021-06-01)
- Investigating a neural language model’s replicability of psycholinguistic experiments: A case study of NPI licensing
  by: Unsub Shin, et al.
  Published: (2023-02-01)
- Zero-anaphora resolution in Korean based on deep language representation model: BERT
  by: Youngtae Kim, et al.
  Published: (2020-10-01)
- Standardized Database of 12-Lead Electrocardiograms with a Common Standard for the Promotion of Cardiovascular Research: KURIAS-ECG
  by: Hakje Yoo, et al.
  Published: (2023-04-01)