Distribution Aware Testing Framework for Deep Neural Networks
The increasing use of deep learning (DL) in safety-critical applications highlights the critical need for systematic and effective testing to ensure system reliability and quality. In this context, researchers have conducted various DL testing studies to identify weaknesses in Deep Neural Network (DNN) models, including exploring test coverage, generating challenging test inputs, and test selection.
Main Authors: Demet Demir, Aysu Betin Can, Elif Surer
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Subjects: Data distribution; deep learning testing; explainability; test selection and prioritization; uncertainty
Online Access: https://ieeexplore.ieee.org/document/10296901/
author | Demet Demir Aysu Betin Can Elif Surer |
collection | DOAJ |
description | The increasing use of deep learning (DL) in safety-critical applications highlights the critical need for systematic and effective testing to ensure system reliability and quality. In this context, researchers have conducted various DL testing studies to identify weaknesses in Deep Neural Network (DNN) models, including exploring test coverage, generating challenging test inputs, and test selection. In this study, we propose a generic DNN testing framework that takes the distribution of test data into consideration and prioritizes test inputs based on their potential to cause incorrect predictions by the tested DNN model. We evaluated the proposed framework using image classification as a use case, conducting empirical evaluations in which each phase was implemented with carefully chosen methods. We employed Variational Autoencoders to identify and eliminate out-of-distribution data from the test datasets. Additionally, we prioritized test data that increase the model's uncertainty, as such inputs are more likely to reveal potential faults. Eliminating out-of-distribution data enables a more focused analysis of the sources of DNN failures, while using prioritized test data reduces the cost of test data labeling. Furthermore, we explored the use of post-hoc explainability methods to identify the causes of incorrect predictions, a process similar to debugging. This study can serve as a prelude to incorporating explainability methods into the model development process after testing. |
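The description outlines a three-step pipeline (VAE-based out-of-distribution filtering, uncertainty-based test prioritization, post-hoc explanation of failures), but the record contains no code. The sketch below is a minimal, illustrative reading of those steps in PyTorch, assuming a single-channel 28×28 image classifier with dropout layers and inputs scaled to [0, 1]; the names (`ConvVAE`, `ood_scores`, `predictive_entropy`, `prioritize`, `saliency_map`) are hypothetical, and negative ELBO, MC-dropout entropy, and vanilla-gradient saliency are stand-in choices, not necessarily the paper's exact methods.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    """Small convolutional VAE trained on in-distribution data only;
    a high negative ELBO on a test input suggests it is out-of-distribution."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_logvar = nn.Linear(64 * 7 * 7, latent_dim)
        self.dec_fc = nn.Linear(latent_dim, 64 * 7 * 7)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 7 -> 14
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 14 -> 28
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(self.dec_fc(z).view(-1, 64, 7, 7)), mu, logvar

@torch.no_grad()
def ood_scores(vae, x):
    """Per-input negative ELBO (reconstruction + KL); higher = more likely OOD."""
    recon, mu, logvar = vae(x)
    rec = F.binary_cross_entropy(recon, x, reduction="none").flatten(1).sum(1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1)
    return rec + kl

@torch.no_grad()
def predictive_entropy(classifier, x, mc_samples=20):
    """MC-dropout predictive entropy as the uncertainty score for prioritization.
    Assumes the classifier's only stochastic layers are dropout (no batch norm)."""
    classifier.train()  # keep dropout active at inference time
    probs = torch.stack(
        [F.softmax(classifier(x), dim=1) for _ in range(mc_samples)]
    ).mean(0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(1)

def prioritize(vae, classifier, x_test, ood_threshold):
    """Drop likely-OOD inputs, then rank the rest by descending uncertainty."""
    in_dist = ood_scores(vae, x_test) < ood_threshold
    x_in = x_test[in_dist]
    order = predictive_entropy(classifier, x_in).argsort(descending=True)
    return x_in[order]

def saliency_map(classifier, x, target_class):
    """Vanilla-gradient saliency: a simple post-hoc explanation of one prediction."""
    classifier.eval()
    x = x.clone().requires_grad_(True)
    classifier(x)[0, target_class].backward()
    return x.grad.abs().squeeze(0)
```

A plausible usage, under the same assumptions, is to set `ood_threshold` to a high percentile (e.g., the 95th) of `ood_scores` on held-out in-distribution validation data, label only the top-ranked outputs of `prioritize`, and inspect any misclassified ones with `saliency_map` (or Grad-CAM and similar methods) to debug the failure, mirroring the explainability step in the description.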
format | Article |
id | doaj.art-a0419ce4617143e7852ef516269df117 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
doi | 10.1109/ACCESS.2023.3327820 |
volume | 11 |
pages | 119481-119505 |
author_details | Demet Demir (ORCID: 0000-0002-7497-5957), Department of Information Systems, Graduate School of Informatics, Middle East Technical University, Ankara, Turkey; Aysu Betin Can (ORCID: 0000-0002-4828-0190), Department of Information Systems, Graduate School of Informatics, Middle East Technical University, Ankara, Turkey; Elif Surer (ORCID: 0000-0002-0738-6669), Department of Modeling and Simulation, Graduate School of Informatics, Middle East Technical University, Ankara, Turkey |
title | Distribution Aware Testing Framework for Deep Neural Networks |
topic | Data distribution; deep learning testing; explainability; test selection and prioritization; uncertainty |
url | https://ieeexplore.ieee.org/document/10296901/ |