Digital Watermarking as an Adversarial Attack on Medical Image Analysis with Deep Learning

In recent years, Deep Neural Networks (DNNs) have become popular in many disciplines, such as Computer Vision (CV), and the evolution of hardware has helped researchers develop many powerful Deep Learning (DL) models for a variety of problems. One of the most important challenges in the CV area is Medical Image Analysis. However, adversarial attacks have proven to be an important threat to vision systems, as they can significantly reduce the performance of the models. This paper brings to light a different side of digital watermarking: its potential as a black-box adversarial attack. In this context, apart from proposing a new category of adversarial attacks named watermarking attacks, we highlight a significant problem, as the widespread use of watermarks for security reasons appears to pose significant risks to vision systems. For this purpose, a moment-based local image watermarking method is applied to three modalities: Magnetic Resonance Imaging (MRI), Computed Tomography (CT) scans, and X-ray images. The introduced methodology was tested on three state-of-the-art CV models, DenseNet201, DenseNet169, and MobileNetV2. The results revealed that the proposed attack degraded the models' accuracy by over 50%. Additionally, MobileNetV2 was the most vulnerable model, and the modality with the largest reduction was CT scans.
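The core idea of a watermarking attack is that an ordinary watermark, embedded for security or provenance, acts as a perturbation of the input a classifier sees. The following is a minimal illustrative sketch of that idea, assuming a simple additive local watermark on a synthetic grayscale image; it is a hypothetical stand-in, not the authors' moment-based watermarking method.

```python
import numpy as np

def embed_watermark(image: np.ndarray, mark: np.ndarray,
                    x: int, y: int, strength: float = 0.1) -> np.ndarray:
    """Additively embed a binary watermark into a local window of `image`
    at top-left corner (x, y), keeping pixel values in [0, 1]."""
    out = image.astype(np.float64).copy()
    h, w = mark.shape
    out[y:y + h, x:x + w] += strength * mark
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                               # stand-in medical image in [0, 1]
mark = rng.integers(0, 2, (16, 16)).astype(np.float64)   # binary watermark pattern

adv = embed_watermark(img, mark, x=24, y=24, strength=0.1)

# The L2 distance quantifies the perturbation a model would receive;
# in the attack scenario, this watermarked input is what degrades accuracy.
l2 = float(np.linalg.norm(adv - img))
print(round(l2, 3))
```

In the black-box setting the paper describes, no gradient access to the model is needed: the watermark is embedded independently of the classifier, and the attack is evaluated by comparing model accuracy on clean versus watermarked images.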


Bibliographic Details
Main Authors: Kyriakos D. Apostolidis, George A. Papakostas (MLV Research Group, Department of Computer Science, International Hellenic University, Kavala, Greece)
Format: Article
Language: English
Published: MDPI AG, 2022-05-01
Series: Journal of Imaging
ISSN: 2313-433X
DOI: 10.3390/jimaging8060155
Subjects: medical image analysis; deep learning; computer vision; adversarial attack; watermarking; robustness
Online Access: https://www.mdpi.com/2313-433X/8/6/155