Explainable artificial intelligence models for enhancing classification reliability of ground weapon systems
This study focused on the development of a reliable artificial intelligence (AI) model to enhance the classification reliability of ground weapon systems for surveillance and reconnaissance applications. The proposed AI model overcomes the limited data availability of military objects such as tanks...
Main Authors: | Gimin Bae, Janghyong Lee |
---|---|
Format: | Article |
Language: | English |
Published: | Institute of Defense Acquisition Program, 2023-12-01 |
Series: | 선진국방연구 |
Subjects: | classification of ground weapon systems, explainable artificial intelligence, transfer learning, MobileNet, Grad-CAM |
Online Access: | https://150.95.154.243/index.php/JAMS/article/view/216 |
_version_ | 1797361022929993728 |
---|---|
author | Gimin Bae Janghyong Lee |
author_facet | Gimin Bae Janghyong Lee |
author_sort | Gimin Bae |
collection | DOAJ |
description |
This study focused on the development of a reliable artificial intelligence (AI) model to enhance the classification reliability of ground weapon systems for surveillance and reconnaissance applications. The proposed AI model overcomes the limited availability of data on military objects such as tanks, cannons, and multiple-launch rockets by leveraging transfer learning and fine-tuning techniques. A comprehensive evaluation of 35 deep learning models on the publicly available Military-Vehicles dataset from Kaggle identified MobileNet as the most suitable model for ground weapon system classification. The selected MobileNet model achieved an average F1 score of 92% when tested on a dataset comprising five types of ground weapon systems. In addition, the application of the explainable AI technique Grad-CAM provided insight into the decision-making process of the proposed model and verified its reliability. Real-world evaluations using frames extracted from training videos demonstrated promising accuracy for tanks, cannons, and multiple-launch rockets. However, object occlusion and the absence of target objects in some images led to misclassifications. Overall, this study contributes to the development of explainable and reliable AI models for enhancing the performance of ground surveillance and reconnaissance systems.
|
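The abstract above outlines transfer learning and fine-tuning of MobileNet followed by Grad-CAM explanations. The following is a minimal TensorFlow/Keras sketch of such a pipeline, not the authors' implementation; the five-class setup, learning rates, layer name, and file names are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 5  # five ground weapon system classes (assumed from the abstract)

# 1) Transfer learning: start from an ImageNet-pretrained MobileNet backbone and
#    attach a new classification head for the weapon-system classes.
base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for the initial training phase

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds: your datasets

# 2) Fine-tuning: unfreeze the backbone and continue with a much smaller learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

# 3) Grad-CAM: weight the last convolutional feature map by the gradient of the
#    predicted class score, yielding a heatmap of the regions that drove the decision.
def grad_cam(model, img_batch, last_conv_layer_name="conv_pw_13_relu"):
    # "conv_pw_13_relu" is the final conv activation in Keras' MobileNet;
    # verify the name with model.summary() for your Keras version.
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_batch, training=False)
        class_idx = tf.argmax(preds[0])
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)      # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))   # per-channel importance
    heatmap = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    heatmap = tf.maximum(heatmap, 0)                  # keep positive evidence only
    return (heatmap / (tf.reduce_max(heatmap) + 1e-8)).numpy()

# Usage sketch (hypothetical file name):
# img = tf.keras.utils.load_img("tank.jpg", target_size=(224, 224))
# batch = tf.keras.applications.mobilenet.preprocess_input(
#     np.expand_dims(tf.keras.utils.img_to_array(img), 0))
# heatmap = grad_cam(model, batch)  # upsample and overlay on the image to inspect
```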
first_indexed | 2024-03-08T15:48:00Z |
format | Article |
id | doaj.art-ae95717ecced47e692d05535e69dac40 |
institution | Directory Open Access Journal |
issn | 2635-5531 2636-1329 |
language | English |
last_indexed | 2024-03-08T15:48:00Z |
publishDate | 2023-12-01 |
publisher | Institute of Defense Acquisition Program |
record_format | Article |
series | 선진국방연구 |
spelling | doaj.art-ae95717ecced47e692d05535e69dac402024-01-09T09:21:27ZengInstitute of Defense Acquisition Program선진국방연구2635-55312636-13292023-12-0163Explainable artificial intelligence models for enhancing classification reliability of ground weapon systemsGimin Bae0Janghyong Lee1Korea Army Research Center for Future and InnovationKorea Army Research Center for Future and Innovation This study focused on the development of a reliable artificial intelligence (AI) model to enhance the classification reliability of ground weapon systems for surveillance and reconnaissance applications. The proposed AI model overcomes the limited availability of data on military objects such as tanks, cannons, and multiple-launch rockets by leveraging transfer learning and fine-tuning techniques. A comprehensive evaluation of 35 deep learning models on the publicly available Military-Vehicles dataset from Kaggle identified MobileNet as the most suitable model for ground weapon system classification. The selected MobileNet model achieved an average F1 score of 92% when tested on a dataset comprising five types of ground weapon systems. In addition, the application of the explainable AI technique Grad-CAM provided insight into the decision-making process of the proposed model and verified its reliability. Real-world evaluations using frames extracted from training videos demonstrated promising accuracy for tanks, cannons, and multiple-launch rockets. However, object occlusion and the absence of target objects in some images led to misclassifications. Overall, this study contributes to the development of explainable and reliable AI models for enhancing the performance of ground surveillance and reconnaissance systems. https://150.95.154.243/index.php/JAMS/article/view/216classification of ground weapon systemsexplainable artificial intelligencetransfer learningMobileNetGrad-CAM |
spellingShingle | Gimin Bae Janghyong Lee Explainable artificial intelligence models for enhancing classification reliability of ground weapon systems 선진국방연구 classification of ground weapon systems explainable artificial intelligence transfer learning MobileNet Grad-CAM |
title | Explainable artificial intelligence models for enhancing classification reliability of ground weapon systems |
title_full | Explainable artificial intelligence models for enhancing classification reliability of ground weapon systems |
title_fullStr | Explainable artificial intelligence models for enhancing classification reliability of ground weapon systems |
title_full_unstemmed | Explainable artificial intelligence models for enhancing classification reliability of ground weapon systems |
title_short | Explainable artificial intelligence models for enhancing classification reliability of ground weapon systems |
title_sort | explainable artificial intelligence models for enhancing classification reliability of ground weapon systems |
topic | classification of ground weapon systems explainable artificial intelligence transfer learning MobileNet Grad-CAM |
url | https://150.95.154.243/index.php/JAMS/article/view/216 |
work_keys_str_mv | AT giminbae explainableartificialintelligencemodelsforenhancingclassificationreliabilityofgroundweaponsystems AT janghyonglee explainableartificialintelligencemodelsforenhancingclassificationreliabilityofgroundweaponsystems |