MTBERT-Attention: An Explainable BERT Model based on Multi-Task Learning for Cognitive Text Classification


Bibliographic Details
Main Authors: Hanane Sebbaq, Nour-eddine El Faddouli
Format: Article
Language: English
Published: Elsevier 2023-09-01
Series: Scientific African
Online Access: http://www.sciencedirect.com/science/article/pii/S2468227623002557
Description
Summary: In recent years, there has been considerable focus on Bloom's taxonomy-based classification of e-learning materials, and researchers have employed a variety of methods and features. In our previous work, we addressed this problem with different techniques and algorithms: we began by boosting traditional machine learning algorithms, then moved on to deep learning by proposing a bidirectional GRU (BI-GRU) model combined with word2vec, and finally proposed a fine-tuned bidirectional encoder representations from Transformers (BERT) model. The limitations of that model's performance, together with the lack of an annotated dataset, led us to explore a novel approach to the cognitive classification of text. First, we propose MTBERT-Attention, a novel and explainable model based on multi-task learning (MTL), BERT, and a co-attention mechanism. MTL improves the generalization capacity of our primary task and permits data augmentation; BERT serves as a shared stack for transferring knowledge between tasks; and the co-attention mechanism places particular emphasis on the significant aspects of the learning objective. Second, we propose an explainability framework based on the attention mechanism. Lastly, we carry out in-depth experiments to assess the viability and efficiency of the proposed model and of the explainability framework. Our model outperforms the baseline models in loss, F1-score, and accuracy: it attains an overall classification accuracy of 97.71% on the test set and effectively classifies learning objectives that use ambiguous action verbs from Bloom's taxonomy. To evaluate the explainability framework, we conduct a qualitative and quantitative study of explanation quality and computational cost, adopting Local Interpretable Model-agnostic Explanations (LIME) as the baseline for comparison. Our experiments show that our approach to explainability outperforms the LIME explainer in both fidelity and computational cost.
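The abstract gives no implementation details, but the architecture it describes (a shared BERT encoder, task-specific heads trained jointly, and attention weights reused as token-level explanations) can be sketched roughly as follows. This is a minimal sketch in PyTorch with Hugging Face transformers, not the authors' code: the two-task setup, the six Bloom levels on the primary head, the auxiliary label count, and the single additive-attention pooling layer (a stand-in for the paper's co-attention mechanism) are all illustrative assumptions.

    # Hedged sketch: multi-task BERT classifier with attention pooling.
    # Assumptions (not from the paper): two tasks, six Bloom levels for the
    # primary task, and additive attention standing in for co-attention.
    import torch
    import torch.nn as nn
    from transformers import BertModel, BertTokenizerFast

    class MTBertAttention(nn.Module):
        def __init__(self, num_primary_labels=6, num_aux_labels=2,
                     model_name="bert-base-uncased"):
            super().__init__()
            self.bert = BertModel.from_pretrained(model_name)  # shared encoder
            hidden = self.bert.config.hidden_size
            self.attn = nn.Linear(hidden, 1)  # scores each token state
            self.primary_head = nn.Linear(hidden, num_primary_labels)
            self.aux_head = nn.Linear(hidden, num_aux_labels)

        def forward(self, input_ids, attention_mask):
            states = self.bert(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
            scores = self.attn(states).squeeze(-1)              # (batch, seq)
            scores = scores.masked_fill(attention_mask == 0, float("-inf"))
            weights = torch.softmax(scores, dim=-1)             # token importances
            pooled = (weights.unsqueeze(-1) * states).sum(dim=1)
            return self.primary_head(pooled), self.aux_head(pooled), weights

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = MTBertAttention()
    batch = tokenizer(["Students will analyse the causes of inflation."],
                      return_tensors="pt", padding=True)
    primary_logits, aux_logits, token_weights = model(
        batch["input_ids"], batch["attention_mask"])
    # Joint training would sum per-task cross-entropy losses, e.g.
    # loss = ce(primary_logits, y_primary) + ce(aux_logits, y_aux)
    # token_weights serves as a per-token explanation of the prediction.

Under these assumptions, the attention weights come for free with each forward pass, which is consistent with the paper's finding that an attention-based explainer is cheaper than a perturbation-based method such as LIME, whose explanations require many additional model evaluations per input.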
ISSN: 2468-2276