Local Feature Enhancement for Nested Entity Recognition Using a Convolutional Block Attention Module
Main Authors: | Jinxin Deng, Junbao Liu, Xiaoqin Ma, Xizhong Qin, Zhenhong Jia |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-08-01 |
Series: | Applied Sciences |
Subjects: | nested entity recognition; convolutional block attention module; rotary position embedding |
Online Access: | https://www.mdpi.com/2076-3417/13/16/9200 |
_version_ | 1797585615110275072 |
---|---|
author | Jinxin Deng; Junbao Liu; Xiaoqin Ma; Xizhong Qin; Zhenhong Jia
author_facet | Jinxin Deng; Junbao Liu; Xiaoqin Ma; Xizhong Qin; Zhenhong Jia
author_sort | Jinxin Deng |
collection | DOAJ |
description | Named entity recognition involves two main types: nested named entity recognition and flat named entity recognition. The span-based approach treats nested entities and flat entities uniformly by classifying entities on a span representation. However, the span-based approach ignores the local features within entities and the relative position features between the head and tail tokens, which degrades entity recognition performance. To address these issues, we propose a nested entity recognition model that uses a convolutional block attention module and rotary position embedding to enhance local features and relative position features. Specifically, we apply rotary position embedding to the sentence representation and capture the semantic information between the head and tail tokens using a biaffine attention mechanism. Meanwhile, the convolution module captures the local features within the entity to generate the span representation. Finally, the two parts of the representation are fused for entity classification. Extensive experiments were conducted on five widely used benchmark datasets to demonstrate the effectiveness of our proposed model. |
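The core mechanism the abstract describes (rotating token representations with rotary position embedding, then scoring head/tail token pairs with a biaffine attention to produce span scores) can be sketched roughly as follows. This is an illustrative NumPy sketch only, not the authors' code: all names, dimensions, and parameter shapes are assumptions, and the convolutional block attention module and representation fusion are omitted.

```python
import numpy as np

def rotary_embed(x):
    """Apply rotary position embedding (RoPE) to token vectors.

    x: (seq_len, dim) with dim even. Each consecutive pair of features
    is rotated by an angle proportional to the token position, so dot
    products between rotated vectors encode relative position.
    """
    seq_len, dim = x.shape
    pos = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    freqs = 10000.0 ** (-np.arange(0, dim, 2) / dim)  # (dim/2,)
    angles = pos * freqs                              # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                # 2-D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

def biaffine_scores(h, t, U, w, b):
    """score[i, j] = h_i^T U t_j + w . [h_i; t_j] + b

    h: head-token representations, t: tail-token representations.
    Returns a (seq_len, seq_len) matrix of candidate-span scores.
    """
    d = h.shape[1]
    bilinear = h @ U @ t.T
    linear = (h @ w[:d])[:, None] + (t @ w[d:])[None, :]
    return bilinear + linear + b

rng = np.random.default_rng(0)
seq_len, dim = 6, 8
x = rng.normal(size=(seq_len, dim))    # stand-in for encoder outputs
x_rot = rotary_embed(x)

U = rng.normal(size=(dim, dim))        # biaffine weight (assumed shape)
w = rng.normal(size=(2 * dim,))
span_scores = biaffine_scores(x_rot, x_rot, U, w, 0.0)  # (seq_len, seq_len)
```

In the paper's design these biaffine span scores would then be fused with a convolution-based span representation before classification; here `span_scores` alone illustrates how every (head, tail) pair gets a score, which is what lets the span formulation handle nested and flat entities uniformly. Note that RoPE is a pure rotation, so it leaves each token vector's norm unchanged while injecting position into pairwise interactions.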
first_indexed | 2024-03-11T00:09:35Z |
format | Article |
id | doaj.art-6d38fd5752434f32b09cb6ddf55185a0 |
institution | Directory Open Access Journal |
issn | 2076-3417 |
language | English |
last_indexed | 2024-03-11T00:09:35Z |
publishDate | 2023-08-01 |
publisher | MDPI AG |
record_format | Article |
series | Applied Sciences |
spelling | doaj.art-6d38fd5752434f32b09cb6ddf55185a0 | 2023-11-19T00:06:00Z | eng | MDPI AG | Applied Sciences | 2076-3417 | 2023-08-01 | 13(16):9200 | 10.3390/app13169200 | Local Feature Enhancement for Nested Entity Recognition Using a Convolutional Block Attention Module | Jinxin Deng; Junbao Liu; Xiaoqin Ma; Xizhong Qin; Zhenhong Jia (all: College of Information Science and Engineering, Xinjiang University, Urumqi 830049, China) | Named entity recognition involves two main types: nested named entity recognition and flat named entity recognition. The span-based approach treats nested entities and flat entities uniformly by classifying entities on a span representation. However, the span-based approach ignores the local features within entities and the relative position features between the head and tail tokens, which degrades entity recognition performance. To address these issues, we propose a nested entity recognition model that uses a convolutional block attention module and rotary position embedding to enhance local features and relative position features. Specifically, we apply rotary position embedding to the sentence representation and capture the semantic information between the head and tail tokens using a biaffine attention mechanism. Meanwhile, the convolution module captures the local features within the entity to generate the span representation. Finally, the two parts of the representation are fused for entity classification. Extensive experiments were conducted on five widely used benchmark datasets to demonstrate the effectiveness of our proposed model. | https://www.mdpi.com/2076-3417/13/16/9200 | nested entity recognition; convolutional block attention module; rotary position embedding |
spellingShingle | Jinxin Deng Junbao Liu Xiaoqin Ma Xizhong Qin Zhenhong Jia Local Feature Enhancement for Nested Entity Recognition Using a Convolutional Block Attention Module Applied Sciences nested entity recognition convolutional block attention module rotary position embedding |
title | Local Feature Enhancement for Nested Entity Recognition Using a Convolutional Block Attention Module |
title_full | Local Feature Enhancement for Nested Entity Recognition Using a Convolutional Block Attention Module |
title_fullStr | Local Feature Enhancement for Nested Entity Recognition Using a Convolutional Block Attention Module |
title_full_unstemmed | Local Feature Enhancement for Nested Entity Recognition Using a Convolutional Block Attention Module |
title_short | Local Feature Enhancement for Nested Entity Recognition Using a Convolutional Block Attention Module |
title_sort | local feature enhancement for nested entity recognition using a convolutional block attention module |
topic | nested entity recognition convolutional block attention module rotary position embedding |
url | https://www.mdpi.com/2076-3417/13/16/9200 |
work_keys_str_mv | AT jinxindeng localfeatureenhancementfornestedentityrecognitionusingaconvolutionalblockattentionmodule AT junbaoliu localfeatureenhancementfornestedentityrecognitionusingaconvolutionalblockattentionmodule AT xiaoqinma localfeatureenhancementfornestedentityrecognitionusingaconvolutionalblockattentionmodule AT xizhongqin localfeatureenhancementfornestedentityrecognitionusingaconvolutionalblockattentionmodule AT zhenhongjia localfeatureenhancementfornestedentityrecognitionusingaconvolutionalblockattentionmodule |