Watermarking of Deep Recurrent Neural Network Using Adversarial Examples to Protect Intellectual Property
Deep learning algorithms are key elements of many state-of-the-art solutions, but developing them for production requires huge volumes of data, computational resources, and human expertise. Illegal reproduction, distribution, and modification of these models can therefore cause economic damage to developers and lead to copyright infringement...
Main Authors: | Pulkit Rathi, Saumya Bhadauria, Sugandha Rathi |
---|---|
Format: | Article |
Language: | English |
Published: | Taylor & Francis Group, 2022-12-01 |
Series: | Applied Artificial Intelligence |
Subjects: | adversarial examples; deep neural network; deep speech; speech-to-text conversion; watermarking |
Online Access: | http://dx.doi.org/10.1080/08839514.2021.2008613 |
_version_ | 1827774620516745216 |
---|---|
author | Pulkit Rathi; Saumya Bhadauria; Sugandha Rathi |
collection | DOAJ |
description | Deep learning algorithms are key elements of many state-of-the-art solutions, but developing them for production requires huge volumes of data, computational resources, and human expertise. Illegal reproduction, distribution, and modification of these models can therefore cause economic damage to developers and lead to copyright infringement. We propose a novel watermarking algorithm for deep recurrent neural networks, based on adversarial examples, that can verify ownership of a model in a black-box way. The paper demonstrates the algorithm on DeepSpeech, a popular pre-trained speech-to-text deep recurrent neural network, without affecting the model's accuracy. Watermarking is done by generating a set of adversarial examples: noise is added to an input until the DeepSpeech model transcribes it as a chosen target string. In the case of copyright infringement, these adversarial examples can be used to verify ownership: if the allegedly stolen model predicts the same target string for the adversarial examples, ownership of the model is established. This watermarking algorithm can minimize the economic damage that theft and plagiarism of deep learning models cause their owners. |
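The mechanism the abstract describes (optimize an additive noise so a CTC-trained speech-to-text model transcribes a trigger clip as a secret target string, then use that behavior as a black-box ownership test) can be illustrated in code. Below is a minimal PyTorch sketch, not the authors' implementation: the `model` interface, the character vocabulary, the optimization budget, and the `eps` noise bound are all illustrative assumptions.

```python
# Hypothetical sketch of adversarial-example watermarking for a
# DeepSpeech-style model. Assumes `model(audio)` takes a (batch, samples)
# waveform and returns CTC logits of shape (T, batch, vocab); all names
# here are illustrative.
import torch
import torch.nn.functional as F

VOCAB = " abcdefghijklmnopqrstuvwxyz'"           # index 0 reserved for CTC blank
CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(VOCAB)}

def encode(text):
    # Target strings are assumed to use only lowercase letters, spaces, "'".
    return torch.tensor([CHAR_TO_IDX[c] for c in text.lower()])

def craft_watermark(model, audio, target, steps=500, lr=1e-3, eps=0.05):
    """Optimize a small perturbation delta (clipped to +/-eps) so that the
    model's CTC decoding of audio+delta is the watermark string `target`."""
    target_idx = encode(target).unsqueeze(0)      # shape (1, S)
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ctc = torch.nn.CTCLoss(blank=0)
    for _ in range(steps):
        logits = model((audio + delta).unsqueeze(0))   # (T, 1, vocab)
        log_probs = F.log_softmax(logits, dim=-1)
        loss = ctc(log_probs, target_idx,
                   input_lengths=torch.tensor([log_probs.size(0)]),
                   target_lengths=torch.tensor([target_idx.size(1)]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                     # keep the noise small
            delta.clamp_(-eps, eps)
    return (audio + delta).detach()

def greedy_decode(logits):
    """Standard greedy CTC decoding: collapse repeats, drop blanks."""
    ids = logits.argmax(dim=-1).squeeze(1).tolist()
    out, prev = [], 0
    for i in ids:
        if i != prev and i != 0:
            out.append(VOCAB[i - 1])
        prev = i
    return "".join(out)

def verify_ownership(suspect_model, watermark_audio, target):
    """Black-box ownership check: a stolen copy should still transcribe
    the trigger audio as the secret target string."""
    with torch.no_grad():
        logits = suspect_model(watermark_audio.unsqueeze(0))
    return greedy_decode(logits) == target
```

Note that `verify_ownership` only needs the suspect model's transcriptions, not its weights or gradients, which is what makes the verification black-box in the sense the abstract describes.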
first_indexed | 2024-03-11T13:40:57Z |
format | Article |
id | doaj.art-f97fccef3f0d4579b963de02d537fa9a |
institution | Directory Open Access Journal |
issn | 0883-9514; 1087-6545 |
language | English |
last_indexed | 2024-03-11T13:40:57Z |
publishDate | 2022-12-01 |
publisher | Taylor & Francis Group |
record_format | Article |
series | Applied Artificial Intelligence |
spelling | Pulkit Rathi (ABV-Indian Institute of Information Technology and Management), Saumya Bhadauria (ABV-Indian Institute of Information Technology and Management), and Sugandha Rathi (Department of Computer Science, Amity University). "Watermarking of Deep Recurrent Neural Network Using Adversarial Examples to Protect Intellectual Property." Applied Artificial Intelligence 36(1), 2022-12-01. Taylor & Francis Group. ISSN 0883-9514; 1087-6545. http://dx.doi.org/10.1080/08839514.2021.2008613 |
title | Watermarking of Deep Recurrent Neural Network Using Adversarial Examples to Protect Intellectual Property |
topic | adversarial examples; deep neural network; deep speech; speech-to-text conversion; watermarking |
url | http://dx.doi.org/10.1080/08839514.2021.2008613 |