Special session on attacking and protecting Artificial Intelligence


Bibliographic Details
Main Authors: Bhasin, Shivam, Garg, Siddarth, Regazzoni, Francesco
Format: Journal Article
Language: English
Published: 2021
Online Access: https://hdl.handle.net/10356/147413
Description
Summary: Modern artificial intelligence systems rely largely on advanced algorithms, including machine learning techniques such as deep learning. The research community has invested significant effort in understanding these algorithms, tuning them optimally, and improving their performance, but it has mostly neglected the security facet of the problem. Recent attacks and exploits have demonstrated that machine learning-based algorithms are susceptible not only to attacks targeting computer systems in general, such as backdoors, hardware trojans, and fault attacks, but also to a range of attacks targeting them specifically, such as adversarial input perturbations. Moreover, implementations of machine learning algorithms are often crucial proprietary assets for companies, which makes them an attractive target for piracy and illegitimate use; as such, they need to be protected like any other intellectual property. This is equally important for machine learning algorithms running on remote servers vulnerable to micro-architectural exploits.