Impact of pre-training on background knowledge and societal bias
With appropriate pre-training on unstructured text, larger and more accurate neural network models can be trained. Unfortunately, unstructured pre-training data may contain undesired societal biases, which a model may mimic and amplify. This thesis focuses on both improving unsuperv...
Main Author: | Kocijan, V
---|---
Other Authors: | Lukasiewicz, T
Format: | Thesis
Language: | English
Published: | 2021
Subjects: |
Similar Items

- Review of Knowledge-Enhanced Pre-trained Language Models
  by: HAN Yi, QIAO Linbo, LI Dongsheng, LIAO Xiangke
  Published: (2022-07-01)
- Simple Knowledge Graph Completion Model Based on Differential Negative Sampling and Prompt Learning
  by: Li Duan, et al.
  Published: (2023-08-01)
- MEM-KGC: Masked Entity Model for Knowledge Graph Completion With Pre-Trained Language Model
  by: Bonggeun Choi, et al.
  Published: (2021-01-01)
- Knowledge Graph Completion Algorithm Based on Probabilistic Fuzzy Information Aggregation and Natural Language Processing Technology
  by: Canlin Zhang, et al.
  Published: (2022-12-01)
- Improving FMEA Comprehensibility via Common-Sense Knowledge Graph Completion Techniques
  by: Houssam Razouk, et al.
  Published: (2023-01-01)