Towards Secure Machine Learning Acceleration: Threats and Defenses Across Algorithms, Architecture, and Circuits


Bibliographic Details
Main Author: Lee, Kyungmi
Other Authors: Chandrakasan, Anantha P.
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Degree: Ph.D.
Format: Thesis
Published: Massachusetts Institute of Technology, 2024
License: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0); copyright retained by author(s)
Online Access: https://hdl.handle.net/1721.1/156346

Description
As deep neural networks (DNNs) are widely adopted for high-stakes applications that process sensitive private data and make critical decisions, security concerns about user data and DNN models are growing. In particular, hardware-level vulnerabilities can be exploited to undermine the confidentiality and integrity those applications require. However, conventional hardware designs for DNN acceleration focus largely on improving throughput, energy efficiency, and area efficiency, while hardware-level security solutions remain far less well understood. This thesis investigates memory security for DNN accelerators under a threat model in which the off-chip main memory cannot be trusted.

The first part of this thesis illustrates the vulnerability of sparse DNNs to fault injection on their model parameters. It presents SparseBFA, an algorithm that identifies the most vulnerable bits among the model parameters of a sparse DNN. SparseBFA shows that a victim DNN is highly susceptible to a few bit flips in the coordinates of its sparse weight matrices, amounting to less than 0.00005% of the total memory footprint of its parameters.
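To make the coordinate vulnerability concrete, here is a minimal sketch (illustrative only, not the thesis's SparseBFA implementation; the COO layout, matrix sizes, and chosen bit position are assumptions) showing how flipping a single bit of a stored column index silently relocates a weight without corrupting any weight value:

```python
# Illustrative sketch: a bit flip in the *coordinates* of a sparse weight
# matrix moves a weight to the wrong position, even though no value changes.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse layer stored in COO format: (row, col, value) triples.
rows = np.array([0, 1, 2], dtype=np.uint16)
cols = np.array([5, 9, 3], dtype=np.uint16)
vals = rng.normal(size=3).astype(np.float32)

def dense(rows, cols, vals, shape=(4, 16)):
    w = np.zeros(shape, dtype=np.float32)
    w[rows, cols] = vals
    return w

w_clean = dense(rows, cols, vals)

# Flip bit 3 of the first column index: 5 (0b0101) -> 13 (0b1101).
cols_faulty = cols.copy()
cols_faulty[0] ^= np.uint16(1 << 3)
w_faulty = dense(rows, cols_faulty, vals)

# The weight moved from (0, 5) to (0, 13); the layer now computes a
# different function for every input.
print(np.argwhere(w_clean != w_faulty))  # [[ 0  5] [ 0 13]]
```

SparseBFA's contribution is the search procedure that finds which few of these bits do the most damage; the point here is only that coordinate bits are a disproportionately sensitive target.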
The second part proposes SecureLoop, a design space exploration framework for secure DNN accelerators that support a trusted execution environment (TEE). In such accelerators, cryptographic operations are tightly coupled with the data movement pattern, which complicates the mapping of DNN workloads onto the hardware. SecureLoop addresses this mapping challenge with an analytical model that captures the impact of authentication block assignments and a simulated annealing algorithm that performs cross-layer optimization. The optimal mappings identified by SecureLoop are up to 33% faster and up to 50% better in energy-delay product than those found by conventional mapping algorithms.
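The following sketch shows the shape of such a simulated-annealing search over per-layer authentication block sizes. The cost function, tile sizes, and candidate block sizes below are made-up placeholders, not SecureLoop's actual analytical model:

```python
# Hedged sketch: simulated annealing over per-layer authentication block
# sizes, minimizing a toy cost that penalizes redundant cryptographic work
# when tiles and authentication blocks are misaligned.
import math
import random

random.seed(0)

tile_sizes = [96, 128, 64, 200]       # hypothetical per-layer tile sizes (words)
candidate_blocks = [16, 32, 64, 128]  # hypothetical authentication block sizes

def cost(assignment):
    # Toy stand-in for the analytical model: each tile must fetch whole
    # authentication blocks, so misalignment forces extra words to be
    # transferred and hashed, plus a per-block MAC overhead.
    total = 0.0
    for tile, block in zip(tile_sizes, assignment):
        fetched = math.ceil(tile / block) * block
        total += fetched - tile            # redundant words per tile
        total += 0.1 * fetched / block     # per-block MAC cost (arbitrary weight)
    return total

state = [random.choice(candidate_blocks) for _ in tile_sizes]
best, best_cost, temp = list(state), cost(state), 10.0

for step in range(2000):
    # Cross-layer move: change one layer's authentication block size.
    nxt = list(state)
    nxt[random.randrange(len(nxt))] = random.choice(candidate_blocks)
    delta = cost(nxt) - cost(state)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = nxt
        if cost(state) < best_cost:
            best, best_cost = list(state), cost(state)
    temp *= 0.995                          # geometric cooling schedule

print(best, best_cost)
```

Annealing fits this problem because the block-size choices interact across layers through shared tiles at layer boundaries, so a greedy per-layer choice can miss the jointly optimal assignment.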
Finally, this thesis demonstrates the implementation of a secure DNN accelerator targeting resource-constrained edge and mobile devices. The design addresses the implementation-level challenges of supporting a TEE and achieves low overhead: less than a 4% performance slowdown, 16.5% more energy per multiply-and-accumulate operation, and 8.1% of the accelerator area devoted to TEE support.
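For readers unfamiliar with what "the off-chip memory cannot be trusted" entails, here is a minimal software sketch of the protection a TEE-enabled accelerator implements in hardware: every tile written to untrusted DRAM is encrypted and tagged, and every tile read back is authenticated before use. The addresses, tile contents, and use of AES-GCM here are illustrative assumptions, not the thesis design:

```python
# Minimal sketch of encrypt-then-authenticate protection for tiles stored
# in untrusted off-chip memory (software stand-in for a hardware engine).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

untrusted_dram = {}  # stands in for off-chip memory an attacker may tamper with

def write_tile(addr: int, tile: bytes) -> None:
    nonce = os.urandom(12)  # per-write nonce (a version counter in hardware)
    # Bind the address as associated data so tiles cannot be swapped/replayed
    # across addresses; ciphertext includes the 16-byte authentication tag.
    ct = aead.encrypt(nonce, tile, addr.to_bytes(8, "little"))
    untrusted_dram[addr] = (nonce, ct)

def read_tile(addr: int) -> bytes:
    nonce, ct = untrusted_dram[addr]
    # Raises InvalidTag if the attacker flipped any bit of the stored tile.
    return aead.decrypt(nonce, ct, addr.to_bytes(8, "little"))

write_tile(0x1000, b"weights-tile-0" * 4)
assert read_tile(0x1000) == b"weights-tile-0" * 4
```

The reported overheads come from doing this kind of cryptographic work at memory-traffic rates in hardware while keeping the accelerator's datapath busy.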