DeepDIST: a black-box anti-collusion framework for secure distribution of deep models
Due to enormous computing and storage overhead for well-trained Deep Neural Network (DNN) models, protecting the intellectual property of model owners is a pressing need. As the commercialization of deep models is becoming increasingly popular, the pre-trained models delivered to users may suffer from being illegally copied, redistributed, or abused. In this paper, we propose DeepDIST, the first end-to-end secure DNNs distribution framework in a black-box scenario. Specifically, our framework adopts a dual-level fingerprint (FP) mechanism to provide reliable ownership verification, and proposes two equivalent transformations that can resist collusion attacks, plus a newly designed similarity loss term to improve the security of the transformations. Unlike the existing passive defense schemes that detect colluding participants, we introduce an active defense strategy, namely damaging the performance of the model after the malicious collusion. The extensive experimental results show that DeepDIST can maintain the accuracy of the host DNN after embedding fingerprint conducted for true traitor tracing, and is robust against several popular model modifications. Furthermore, the anti-collusion effect is evaluated on two typical classification tasks (10-class and 100-class), and the proposed DeepDIST can drop the prediction accuracy of the collusion model to 10% and 1% (random guess), respectively.
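The black-box ownership verification described in the abstract is commonly realized in the fingerprinting literature with a secret trigger set: the verifier needs only query access to a suspect model. The following sketch illustrates that generic idea; the trigger-replaying `suspect_model`, the `fingerprint_match` helper, and the 90% match threshold are hypothetical illustrations, not DeepDIST's actual dual-level mechanism.

```python
import numpy as np

rng = np.random.default_rng(42)

NUM_CLASSES = 10
# Secret trigger inputs and the labels assigned to one user's copy.
triggers = rng.standard_normal((5, 16))
user_labels = rng.integers(0, NUM_CLASSES, size=5)

def suspect_model(x):
    """Stand-in for a deployed model queried as a black box.

    It replays the embedded fingerprint: for an input close to a
    trigger, it answers with that trigger's user-specific label.
    """
    idx = int(np.argmin(np.linalg.norm(triggers - x, axis=1)))
    return int(user_labels[idx])

def fingerprint_match(model, triggers, labels, threshold=0.9):
    """Black-box verification: query the model on the secret triggers
    and count how many answers match the user's assigned labels."""
    hits = sum(int(model(t)) == int(l) for t, l in zip(triggers, labels))
    return hits / len(labels) >= threshold

print(fingerprint_match(suspect_model, triggers, user_labels))  # True
```

Because verification only sends queries and reads labels, it traces a traitor without any access to the suspect model's weights, which is what the black-box setting requires.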
| Main Authors: | Cheng, Hang; Li, Xibin; Wang, Huaxiong; Zhang, Xinpeng; Liu, Ximeng; Wang, Meiqing; Li, Fengyong |
|---|---|
| Other Authors: | School of Physical and Mathematical Sciences |
| Format: | Journal Article |
| Language: | English |
| Published: | 2023 |
| Subjects: | Science::Mathematics; Deep Neural Networks; Anti-collusion |
| Online Access: | https://hdl.handle.net/10356/171797 |
author | Cheng, Hang Li, Xibin Wang, Huaxiong Zhang, Xinpeng Liu, Ximeng Wang, Meiqing Li, Fengyong |
author2 | School of Physical and Mathematical Sciences |
author_sort | Cheng, Hang |
collection | NTU |
description | Due to enormous computing and storage overhead for well-trained Deep Neural Network (DNN) models, protecting the intellectual property of model owners is a pressing need. As the commercialization of deep models is becoming increasingly popular, the pre-trained models delivered to users may suffer from being illegally copied, redistributed, or abused. In this paper, we propose DeepDIST, the first end-to-end secure DNNs distribution framework in a black-box scenario. Specifically, our framework adopts a dual-level fingerprint (FP) mechanism to provide reliable ownership verification, and proposes two equivalent transformations that can resist collusion attacks, plus a newly designed similarity loss term to improve the security of the transformations. Unlike the existing passive defense schemes that detect colluding participants, we introduce an active defense strategy, namely damaging the performance of the model after the malicious collusion. The extensive experimental results show that DeepDIST can maintain the accuracy of the host DNN after embedding fingerprint conducted for true traitor tracing, and is robust against several popular model modifications. Furthermore, the anti-collusion effect is evaluated on two typical classification tasks (10-class and 100-class), and the proposed DeepDIST can drop the prediction accuracy of the collusion model to 10% and 1% (random guess), respectively. |
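This record does not spell out DeepDIST's two equivalent transformations, but a classic example of a function-preserving ("equivalent") transformation in a feed-forward network is permuting hidden neurons, and it also shows why parameter-averaging collusion fails. A minimal NumPy sketch under that assumption (the two-layer toy network and the specific permutation are illustrative, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: f(x) = W2 @ relu(W1 @ x)
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((3, 8))

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Equivalent transformation: permute the hidden units. Permuting W1's
# rows and W2's columns by the same permutation changes every parameter
# value but leaves the computed function unchanged.
perm = np.roll(np.arange(8), 1)  # a fixed non-identity permutation
W1p, W2p = W1[perm, :], W2[:, perm]

x = rng.standard_normal(4)
assert np.allclose(forward(W1, W2, x), forward(W1p, W2p, x))

# Collusion attack: average the two distributed copies parameter-wise.
# Corresponding parameters now belong to different neurons, so the
# averaged model (almost surely) no longer computes the original function.
W1_avg, W2_avg = (W1 + W1p) / 2, (W2 + W2p) / 2
collusion_broken = not np.allclose(forward(W1, W2, x), forward(W1_avg, W2_avg, x))
print(collusion_broken)
```

Averaging the original and permuted copies mixes parameters of different neurons, which is the kind of performance collapse the abstract reports: the colluded model's accuracy drops to random guessing.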
first_indexed | 2024-10-01T05:27:59Z |
format | Journal Article |
id | ntu-10356/171797 |
institution | Nanyang Technological University |
language | English |
last_indexed | 2024-10-01T05:27:59Z |
publishDate | 2023 |
record_format | dspace |
spelling | ntu-10356/171797 (2023-11-08T02:56:24Z). This work was supported in part by the National Natural Science Foundation of China under Grant 62172098, Grant 62072109, and Grant 61702105; in part by the Natural Science Foundation of Fujian Province under Grant 2020J01497; and in part by the Education Research Project for Young and Middle-Aged Teachers of the Education Department of Fujian Province under Grant JAT200064. Citation: Cheng, H., Li, X., Wang, H., Zhang, X., Liu, X., Wang, M. & Li, F. (2023). DeepDIST: a black-box anti-collusion framework for secure distribution of deep models. IEEE Transactions on Circuits and Systems for Video Technology. ISSN 1051-8215. DOI: 10.1109/TCSVT.2023.3284914 (https://dx.doi.org/10.1109/TCSVT.2023.3284914). Scopus: 2-s2.0-85162685502. Handle: https://hdl.handle.net/10356/171797. Language: en. © 2023 IEEE. All rights reserved. |
title | DeepDIST: a black-box anti-collusion framework for secure distribution of deep models |
topic | Science::Mathematics Deep Neural Networks Anti-collusion |
url | https://hdl.handle.net/10356/171797 |