Optimizing Small BERTs Trained for German NER
Currently, the most widespread neural network architecture for training language models is BERT, which has led to improvements in various Natural Language Processing (NLP) tasks. In general, the larger the number of parameters in a BERT model, the better the results obtained in these NLP t...
Main Authors: Jochen Zöllner, Konrad Sperfeld, Christoph Wick, Roger Labahn
Format: Article
Language: English
Published: MDPI AG, 2021-10-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/12/11/443
Similar Items
- MenuNER: Domain-Adapted BERT Based NER Approach for a Domain with Limited Dataset and Its Application to Food Menu Domain
  by: Muzamil Hussain Syed, et al.
  Published: (2021-06-01)
- Web Interface of NER and RE with BERT for Biomedical Text Mining
  by: Yeon-Ji Park, et al.
  Published: (2023-04-01)
- BERT-Based Transfer-Learning Approach for Nested Named-Entity Recognition Using Joint Labeling
  by: Ankit Agrawal, et al.
  Published: (2022-01-01)
- Prompt-Based Word-Level Information Injection BERT for Chinese Named Entity Recognition
  by: Qiang He, et al.
  Published: (2023-03-01)
- Deep learning with language models improves named entity recognition for PharmaCoNER
  by: Cong Sun, et al.
  Published: (2021-12-01)