A Comparative Study of Preprocessing and Model Compression Techniques in Deep Learning for Forest Sound Classification

Deep-learning models play a significant role in modern software solutions, with capabilities for handling complex tasks, improving accuracy, automating processes, and adapting to diverse domains, ultimately contributing to advancements across industries. This study provides a comparative analysis of deep-learning techniques that can also be deployed on resource-constrained edge devices. As a novel contribution, we analyze the performance of seven Convolutional Neural Network models in the context of data augmentation, feature extraction, and model compression using acoustic data. The results show that the best performers achieve an optimal trade-off between model accuracy and size when compressed with weight and filter pruning followed by 8-bit quantization. Following the study workflow on the forest sound dataset, MobileNet-v3-small and ACDNet achieved accuracies of 87.95% and 85.64%, while maintaining compact sizes of 243 KB and 484 KB, respectively. Hence, this study concludes that CNNs can be optimized and compressed for deployment on resource-constrained edge devices to classify forest environment sounds.

Bibliographic Details
Main Authors: Thivindu Paranayapa, Piumini Ranasinghe, Dakshina Ranmal, Dulani Meedeniya, Charith Perera
Format: Article
Language: English
Published: MDPI AG 2024-02-01
Series: Sensors
Subjects: augmentation; feature extraction; classification; pruning; quantization
Online Access: https://www.mdpi.com/1424-8220/24/4/1149
author Thivindu Paranayapa
Piumini Ranasinghe
Dakshina Ranmal
Dulani Meedeniya
Charith Perera
collection DOAJ
description Deep-learning models play a significant role in modern software solutions, with capabilities for handling complex tasks, improving accuracy, automating processes, and adapting to diverse domains, ultimately contributing to advancements across industries. This study provides a comparative analysis of deep-learning techniques that can also be deployed on resource-constrained edge devices. As a novel contribution, we analyze the performance of seven Convolutional Neural Network models in the context of data augmentation, feature extraction, and model compression using acoustic data. The results show that the best performers achieve an optimal trade-off between model accuracy and size when compressed with weight and filter pruning followed by 8-bit quantization. Following the study workflow on the forest sound dataset, MobileNet-v3-small and ACDNet achieved accuracies of 87.95% and 85.64%, while maintaining compact sizes of 243 KB and 484 KB, respectively. Hence, this study concludes that CNNs can be optimized and compressed for deployment on resource-constrained edge devices to classify forest environment sounds.
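The compression pipeline named in the abstract, pruning followed by 8-bit quantization, can be illustrated in general terms. The sketch below is not the paper's actual implementation; it shows magnitude-based unstructured weight pruning and affine int8 quantization on a plain NumPy weight matrix, which is the textbook form of these two operations:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights):
    """Affine (asymmetric) 8-bit quantization: map floats to int8 via scale and zero-point."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = np.round(-lo / scale) - 128
    q = np.clip(np.round(weights / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)   # stand-in for one CNN layer's weights
pruned = magnitude_prune(w, sparsity=0.5)          # about half the weights become zero
q, s, zp = quantize_int8(pruned)                   # 4 bytes/weight -> 1 byte/weight
recon = dequantize(q, s, zp)                       # reconstruction error bounded by one step
```

Filter pruning, also mentioned in the abstract, removes entire convolutional filters (whole rows of such a matrix) rather than individual weights, which shrinks the architecture itself instead of just sparsifying it.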
first_indexed 2024-03-07T22:14:47Z
format Article
id doaj.art-3f1bcd89f26043e2b237d5a832666a80
institution Directory Open Access Journal
issn 1424-8220
language English
last_indexed 2024-03-07T22:14:47Z
publishDate 2024-02-01
publisher MDPI AG
record_format Article
series Sensors
spelling doaj.art-3f1bcd89f26043e2b237d5a832666a80 (2024-02-23T15:33:42Z)
doi 10.3390/s24041149
citation Sensors, vol. 24, no. 4, art. 1149, MDPI AG, 2024-02-01
affiliation Thivindu Paranayapa, Piumini Ranasinghe, Dakshina Ranmal, Dulani Meedeniya: Department of Computer Science & Engineering, University of Moratuwa, Moratuwa 10400, Sri Lanka
affiliation Charith Perera: School of Computer Science and Informatics, Cardiff University, Cardiff CF24 3AA, UK
title A Comparative Study of Preprocessing and Model Compression Techniques in Deep Learning for Forest Sound Classification
topic augmentation
feature extraction
classification
pruning
quantization
url https://www.mdpi.com/1424-8220/24/4/1149