Summary: | Edge deep learning accelerators are optimised hardware that enable efficient inference at the edge. The models deployed on these accelerators are often proprietary and therefore sensitive for commercial and privacy reasons. In this paper, we demonstrate a practical vulnerability of deployed deep learning models to timing side-channel attacks. By measuring the execution time of inference, the adversary can determine and reconstruct the model architecture from a family of well-known deep learning models and then use available techniques to recover the remaining hyperparameters. The vulnerability is validated on the Intel Neural Compute Stick 2 for the VGG and ResNet families of models. Moreover, the presented attack is devastating in that it can be performed in a cross-device setting: timing profiles constructed on a device the adversary legally owns can be used to exploit the victim device with a single query, while still achieving a near-perfect success rate.
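
To make the attack flow concrete, the following is a minimal sketch (not the paper's implementation) of timing-based architecture identification: the adversary first profiles the inference latency of each candidate architecture on a device it owns, then matches a latency measured on the victim device against the closest profile. The profile names, latency values, and helper functions here are illustrative assumptions, not measurements from the paper.

    import time
    import statistics

    # Hypothetical timing profiles (median inference latency in ms) collected
    # offline on an attacker-owned device for a family of known architectures.
    PROFILES = {
        "VGG-11": 42.1,
        "VGG-16": 61.8,
        "ResNet-18": 25.4,
        "ResNet-50": 48.9,
    }

    def measure_latency_ms(infer, n_warmup=5, n_runs=30):
        """Time repeated calls to an inference function; return median latency in ms."""
        for _ in range(n_warmup):
            infer()
        samples = []
        for _ in range(n_runs):
            t0 = time.perf_counter()
            infer()
            samples.append((time.perf_counter() - t0) * 1e3)
        return statistics.median(samples)

    def classify_model(latency_ms):
        """Return the profiled architecture whose latency is closest to the measurement."""
        return min(PROFILES, key=lambda name: abs(PROFILES[name] - latency_ms))

    # Stand-in for a victim inference query (pretend VGG-16-like latency).
    victim_infer = lambda: time.sleep(0.0618)
    print(classify_model(measure_latency_ms(victim_infer, n_runs=1)))

In the cross-device, single-query setting described above, the victim-side measurement would reduce to one timed inference (n_runs=1), with the multi-run profiling done entirely on the adversary's own device.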