Integrated image and location analysis for wound classification: a deep learning approach
Abstract The global burden of acute and chronic wounds presents a compelling case for enhancing wound classification methods, a vital step in diagnosing and determining optimal treatments. Recognizing this need, we introduce an innovative multi-modal network based on a deep convolutional neural netw...
Main Authors: | Yash Patel, Tirth Shah, Mrinal Kanti Dhar, Taiyu Zhang, Jeffrey Niezgoda, Sandeep Gopalakrishnan, Zeyun Yu |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2024-03-01 |
Series: | Scientific Reports |
Subjects: | Multi-modal wound image classification; Wound location information; Body map; Combined image-location analysis; Deep learning; Convolutional neural networks |
Online Access: | https://doi.org/10.1038/s41598-024-56626-w |
_version_ | 1797233617426972672 |
---|---|
author | Yash Patel; Tirth Shah; Mrinal Kanti Dhar; Taiyu Zhang; Jeffrey Niezgoda; Sandeep Gopalakrishnan; Zeyun Yu |
author_facet | Yash Patel; Tirth Shah; Mrinal Kanti Dhar; Taiyu Zhang; Jeffrey Niezgoda; Sandeep Gopalakrishnan; Zeyun Yu |
author_sort | Yash Patel |
collection | DOAJ |
description | Abstract The global burden of acute and chronic wounds presents a compelling case for enhancing wound classification methods, a vital step in diagnosing and determining optimal treatments. Recognizing this need, we introduce an innovative multi-modal network based on a deep convolutional neural network for categorizing wounds into four categories: diabetic, pressure, surgical, and venous ulcers. Our multi-modal network uses wound images and their corresponding body locations for more precise classification. A unique aspect of our methodology is incorporating a body map system that facilitates accurate wound location tagging, improving upon traditional wound image classification techniques. A distinctive feature of our approach is the integration of models such as VGG16, ResNet152, and EfficientNet within a novel architecture. This architecture includes elements like spatial and channel-wise Squeeze-and-Excitation modules, Axial Attention, and an Adaptive Gated Multi-Layer Perceptron, providing a robust foundation for classification. Our multi-modal network was trained and evaluated on two distinct datasets comprising relevant images and corresponding location information. Notably, our proposed network outperformed traditional methods, reaching an accuracy range of 74.79–100% for Region of Interest (ROI) without location classifications, 73.98–100% for ROI with location classifications, and 78.10–100% for whole image classifications. This marks a significant enhancement over previously reported performance metrics in the literature. Our results indicate the potential of our multi-modal network as an effective decision-support tool for wound image classification, paving the way for its application in various clinical contexts. |
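The abstract above describes fusing wound-image features with a body-map location tag before classification. A minimal, purely illustrative sketch of that fusion step in Python (all names, dimensions, and the fixed gate value are assumptions for illustration, not the paper's actual implementation):

```python
# Illustrative sketch of multi-modal fusion: combine an image-feature vector
# with a one-hot body-map location before a classifier head.
# Dimensions and the fixed gate value are made up, not from the paper.

def one_hot(index, size):
    """Encode a body-map location index as a one-hot vector."""
    vec = [0.0] * size
    vec[index] = 1.0
    return vec

def gated_fusion(image_features, location_features, gate_weight=0.5):
    """Concatenate the two modalities, scaling the location branch by a
    gate (fixed here; in the paper's Adaptive Gated MLP it would be learned)."""
    gated_location = [gate_weight * x for x in location_features]
    return image_features + gated_location

# Example: a 4-dim image embedding plus one of 8 hypothetical body-map regions.
img = [0.9, 0.1, 0.4, 0.7]   # stand-in for a CNN backbone output
loc = one_hot(3, 8)          # wound tagged at body-map region 3
fused = gated_fusion(img, loc)
print(len(fused))            # joint representation fed to the classifier
```

The joint vector would then be passed to a classification head over the four wound categories (diabetic, pressure, surgical, venous).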
first_indexed | 2024-04-24T16:19:01Z |
format | Article |
id | doaj.art-bd2629b8e5604a6396e91c51cd255ca3 |
institution | Directory Open Access Journal |
issn | 2045-2322 |
language | English |
last_indexed | 2024-04-24T16:19:01Z |
publishDate | 2024-03-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Scientific Reports |
spelling | doaj.art-bd2629b8e5604a6396e91c51cd255ca3 2024-03-31T11:20:12Z eng Nature Portfolio Scientific Reports 2045-2322 2024-03-01 vol. 14, no. 1, pp. 1–20 10.1038/s41598-024-56626-w Integrated image and location analysis for wound classification: a deep learning approach. Authors and affiliations: Yash Patel, Tirth Shah, Mrinal Kanti Dhar, Taiyu Zhang (Department of Computer Science, University of Wisconsin-Milwaukee); Jeffrey Niezgoda (Advancing the Zenith of Healthcare (AZH) Wound and Vascular Center); Sandeep Gopalakrishnan (College of Nursing, University of Wisconsin-Milwaukee); Zeyun Yu (Department of Computer Science, University of Wisconsin-Milwaukee). https://doi.org/10.1038/s41598-024-56626-w Subjects: Multi-modal wound image classification; Wound location information; Body map; Combined image-location analysis; Deep learning; Convolutional neural networks |
spellingShingle | Yash Patel; Tirth Shah; Mrinal Kanti Dhar; Taiyu Zhang; Jeffrey Niezgoda; Sandeep Gopalakrishnan; Zeyun Yu; Integrated image and location analysis for wound classification: a deep learning approach; Scientific Reports; Multi-modal wound image classification; Wound location information; Body map; Combined image-location analysis; Deep learning; Convolutional neural networks |
title | Integrated image and location analysis for wound classification: a deep learning approach |
title_full | Integrated image and location analysis for wound classification: a deep learning approach |
title_fullStr | Integrated image and location analysis for wound classification: a deep learning approach |
title_full_unstemmed | Integrated image and location analysis for wound classification: a deep learning approach |
title_short | Integrated image and location analysis for wound classification: a deep learning approach |
title_sort | integrated image and location analysis for wound classification a deep learning approach |
topic | Multi-modal wound image classification; Wound location information; Body map; Combined image-location analysis; Deep learning; Convolutional neural networks |
url | https://doi.org/10.1038/s41598-024-56626-w |
work_keys_str_mv | AT yashpatel integratedimageandlocationanalysisforwoundclassificationadeeplearningapproach AT tirthshah integratedimageandlocationanalysisforwoundclassificationadeeplearningapproach AT mrinalkantidhar integratedimageandlocationanalysisforwoundclassificationadeeplearningapproach AT taiyuzhang integratedimageandlocationanalysisforwoundclassificationadeeplearningapproach AT jeffreyniezgoda integratedimageandlocationanalysisforwoundclassificationadeeplearningapproach AT sandeepgopalakrishnan integratedimageandlocationanalysisforwoundclassificationadeeplearningapproach AT zeyunyu integratedimageandlocationanalysisforwoundclassificationadeeplearningapproach |