Shortcut Learning Explanations for Deep Natural Language Processing: A Survey on Dataset Biases
Pre-trained large language models (LLMs), fine-tuned on task-specific datasets, have transformed NLP, enabling notable advances in news classification, machine translation, and sentiment analysis. This has revolutionized the field, driving remarkable breakthroughs and progress...
Main Authors: | Varun Dogra, Sahil Verma, Kavita, Marcin Wozniak, Jana Shafi, Muhammad Fazal Ijaz
---|---
Format: | Article
Language: | English
Published: | IEEE, 2024-01-01
Series: | IEEE Access
Online Access: | https://ieeexplore.ieee.org/document/10416838/
Similar Items
- Drop the shortcuts: image augmentation improves fairness and decreases AI detection of race and other demographics from medical images
  by: Ryan Wang, et al.
  Published: (2024-04-01)
- Assessing Biases through Visual Contexts
  by: Anna Arias-Duart, et al.
  Published: (2023-07-01)
- Uncovering and Correcting Shortcut Learning in Machine Learning Models for Skin Cancer Diagnosis
  by: Meike Nauta, et al.
  Published: (2021-12-01)
- Avoiding Shortcut-Learning by Mutual Information Minimization in Deep Learning-Based Image Processing
  by: Louisa Fay, et al.
  Published: (2023-01-01)
- Achieving Multisite Generalization for CNN-Based Disease Diagnosis Models by Mitigating Shortcut Learning
  by: Kaoutar Ben Ahmed, et al.
  Published: (2022-01-01)