Streamlining DNN obfuscation to defend against model stealing attacks


Bibliographic Details
Main Authors: Sun, Yidan, Lam, Siew-Kei, Jiang, Guiyuan, He, Peilan
Other Authors: College of Computing and Data Science
Format: Conference Paper
Language: English
Published: 2024
Subjects: Computer and Information Science; Deep neural network; Defense side-channel attack; Model extraction
Online Access:https://hdl.handle.net/10356/178547
https://ieee-cas.org/event/conference/2024-ieee-international-symposium-circuits-and-systems
author Sun, Yidan
Lam, Siew-Kei
Jiang, Guiyuan
He, Peilan
author2 College of Computing and Data Science
collection NTU
description Side-channel-based Deep Neural Network (DNN) model stealing has become a major concern with the advent of learning-based attacks. In response to this threat, defense mechanisms have been proposed to obfuscate the DNN execution, making it difficult to infer the correlation between side-channel information and the DNN architecture. However, state-of-the-art (SOTA) DNN obfuscation is time-consuming, requires expert-level changes to existing DNN compilers (e.g., Tensor Virtual Machine (TVM)), and often relies on prior knowledge of the attack models. In this work, we study the impact of various obfuscation levels on defense effectiveness, and present a streamlined DNN obfuscation process that is extremely fast and agnostic to the attack model. Our study reveals that by merely modifying the scheduling of DNN operations on the GPU, we can achieve defense performance comparable to the SOTA in an attack-agnostic manner. We also propose a simple algorithm that determines an effective scheduling configuration for mitigating DNN model stealing at a fraction of the time required by SOTA obfuscation methods. Our method can be easily integrated into existing DNN compilers as a security feature, even by non-experts, to protect DNNs against side-channel attacks.
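The kind of scheduling change the abstract refers to can be pictured with TVM, the compiler it names. The sketch below is a minimal, hypothetical illustration, not the paper's algorithm: it builds a GPU matrix-multiply kernel with TVM's tensor-expression API and draws the tile factor from a candidate set at schedule time, so the kernel's execution pattern (and hence its side-channel trace) varies while its output does not. The matrix sizes and the candidate set are assumptions made for illustration.

    import random
    import tvm
    from tvm import te

    # Illustrative sizes and tile candidates; not taken from the paper.
    N, M, K = 1024, 1024, 1024
    A = te.placeholder((N, K), name="A")
    B = te.placeholder((K, M), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

    s = te.create_schedule(C.op)
    # Randomising the tile factor changes how work is partitioned across
    # GPU thread blocks, perturbing the kernel's timing signature without
    # touching the DNN's functional behaviour.
    tile = random.choice([4, 8, 16, 32])  # assumed candidate set
    i, j = s[C].op.axis
    io, ii = s[C].split(i, factor=tile)
    jo, ji = s[C].split(j, factor=tile)
    s[C].reorder(io, jo, ii, ji)
    s[C].bind(io, te.thread_axis("blockIdx.y"))
    s[C].bind(jo, te.thread_axis("blockIdx.x"))
    s[C].bind(ii, te.thread_axis("threadIdx.y"))
    s[C].bind(ji, te.thread_axis("threadIdx.x"))
    mod = tvm.build(s, [A, B, C], target="cuda")

Because only the schedule changes, the compiled model stays functionally identical; searching over such configurations is the kind of lightweight step that plausibly runs in a fraction of the time of full compiler-level obfuscation.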
format Conference Paper
id ntu-10356/178547
institution Nanyang Technological University
language English
publishDate 2024
conference 2024 IEEE International Symposium on Circuits and Systems (ISCAS)
research centre Cyber Security Research Centre @ NTU (CYSREN)
funding Ministry of Education (MOE); Nanyang Technological University. This work was supported in part by the NTU-DESAY SV Research Program 2018-0980, and in part by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2, Grant MOE-T2EP20121-0008.
citation Sun, Y., Lam, S., Jiang, G. & He, P. (2024). Streamlining DNN obfuscation to defend against model stealing attacks. 2024 IEEE International Symposium on Circuits and Systems (ISCAS). https://hdl.handle.net/10356/178547
rights © 2024 IEEE. All rights reserved.
title Streamlining DNN obfuscation to defend against model stealing attacks
topic Computer and Information Science
Deep neural network
Defense side-channel attack
Model extraction
url https://hdl.handle.net/10356/178547
https://ieee-cas.org/event/conference/2024-ieee-international-symposium-circuits-and-systems