Korean Semantic Role Labeling with Bidirectional Encoder Representations from Transformers and Simple Semantic Information


Bibliographic Details
Main Authors: Jangseong Bae, Changki Lee
Format: Article
Language: English
Published: MDPI AG 2022-06-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/12/12/5995
Description
Summary: State-of-the-art semantic role labeling (SRL) performance has been achieved by neural network models that incorporate syntactic feature information such as dependency trees. In recent years, breakthroughs with end-to-end neural network models have yielded state-of-the-art SRL performance even without syntactic features. The advent of the bidirectional encoder representations from transformers (BERT) language model brought another breakthrough. Although the semantic information of each word in a sentence is important for determining that word's meaning, previous end-to-end neural network studies did not utilize semantic information. In this study, we propose a BERT-based SRL model that uses simple semantic information without syntactic feature information. To obtain this semantic information, we used PropBank, which describes the relational information between predicates and arguments. In addition, we utilized text-originated feature information obtained from the training text data. Our proposed model achieved state-of-the-art results on both the Korean PropBank and CoNLL-2009 English benchmarks.
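As a toy illustration of the general idea (not the authors' implementation), the sketch below shows how per-token inputs for an SRL tagger might be augmented with simple semantic information — here, a hypothetical PropBank-style frame lookup for the sentence's predicate — before being fed to an encoder such as BERT. All names and the tiny frame lexicon are illustrative assumptions:

```python
# Toy sketch: attaching simple semantic features to SRL tagger inputs.
# The lexicon and feature names are illustrative, not from the paper.

# Hypothetical PropBank-style lexicon, keyed by surface form for simplicity:
# predicate -> numbered argument roles of its frame.
FRAME_LEXICON = {
    "gave": ["ARG0", "ARG1", "ARG2"],  # giver, thing given, recipient
    "ran":  ["ARG0"],                  # runner
}

def build_features(tokens, predicate_index):
    """Attach a predicate-indicator flag and the predicate's frame roles
    to every token, mimicking 'simple semantic information' features
    that could be embedded and concatenated with BERT token inputs."""
    predicate = tokens[predicate_index].lower()
    roles = FRAME_LEXICON.get(predicate, [])
    features = []
    for i, tok in enumerate(tokens):
        features.append({
            "token": tok,
            "is_predicate": i == predicate_index,
            "frame_roles": roles,  # frame info broadcast to every token
        })
    return features

feats = build_features(["Kim", "gave", "Lee", "a", "book"], 1)
```

In a real model, `is_predicate` and `frame_roles` would be mapped to embedding vectors and combined with the BERT representation of each token before the tagging layer.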
ISSN: 2076-3417