Geometry of Sensitivity: Twice Sampling and Hybrid Clipping in Differential Privacy with Optimal Gaussian Noise and Application to Deep Learning
Main Authors: | Xiao, Hanshen; Wan, Jun; Devadas, Srinivas |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | English |
Published: | ACM | Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2023 |
Online Access: | https://hdl.handle.net/1721.1/153139 |
---|---|
author | Xiao, Hanshen Wan, Jun Devadas, Srinivas |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
collection | MIT |
description | We study the fundamental problem of constructing optimal randomization in Differential Privacy (DP). Depending on the clipping strategy or additional properties of the processing function, the corresponding sensitivity set theoretically determines the randomization necessary to produce the required security parameters. Towards the optimal utility-privacy tradeoff, finding the minimal perturbation for properly selected sensitivity sets stands as a central problem in DP research. In practice, l2/l1-norm clipping with Gaussian/Laplace noise mechanisms is among the most common setups. However, these setups also suffer from the curse of dimensionality. For more generic clipping strategies, the understanding of the optimal noise for a high-dimensional sensitivity set remains limited. This raises challenges in mitigating the worst-case dimension dependence in privacy-preserving randomization, especially for deep learning applications.
In this paper, we revisit the geometry of high-dimensional sensitivity sets and present a series of results characterizing the non-asymptotically optimal Gaussian noise for Rényi DP (RDP). Our results are both negative and positive: on the one hand, we show the curse of dimensionality is tight for a broad class of sensitivity sets satisfying certain symmetry properties; on the other hand, if the representation of the sensitivity set is asymmetric on some group of orthogonal bases, we show the optimal noise bounds need not depend explicitly on either dimension or rank. We also revisit sampling in the high-dimensional scenario, which is key to both privacy amplification and computational efficiency in large-scale data processing. We propose a novel method, termed twice sampling, which implements both sample-wise and coordinate-wise sampling, to enable Gaussian noise to fit the sensitivity geometry more closely. With closed-form RDP analysis, we prove twice sampling yields an asymptotic improvement in privacy amplification given an additional l∞-norm restriction, especially for small sampling rates. We also provide concrete applications of our results to practical tasks. Through tighter privacy analysis combined with twice sampling, we efficiently train ResNet22 on CIFAR10 at a low sampling rate, and achieve 69.7% and 81.6% test accuracy with (ε=2, δ=10⁻⁵) and (ε=8, δ=10⁻⁵) DP guarantees, respectively. |
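The l2-norm clipping plus Gaussian noise setup the abstract calls the most common in practice can be sketched as follows. This is a minimal illustration of the standard Gaussian mechanism over a clipped vector, not the paper's construction; the function names and the simple `sigma * c` noise scale are illustrative assumptions.

```python
import numpy as np

def l2_clip(g, c):
    """Scale vector g so its l2 norm is at most c (standard l2 clipping)."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

def gaussian_mechanism(g, c, sigma, rng):
    """Release a clipped vector with isotropic Gaussian noise.

    After clipping, the l2 sensitivity of the release is at most c,
    so noise with standard deviation sigma * c gives the usual
    Gaussian-mechanism privacy guarantee (parameterized by sigma).
    """
    return l2_clip(g, c) + rng.normal(0.0, sigma * c, size=g.shape)

rng = np.random.default_rng(0)
g = np.array([3.0, 4.0])          # l2 norm 5.0
clipped = l2_clip(g, 1.0)         # rescaled to unit l2 norm
noisy = gaussian_mechanism(g, 1.0, 1.0, rng)
```

The "curse of dimensionality" the abstract mentions shows up here because the isotropic noise adds variance on every coordinate, so the total noise magnitude grows with dimension even when the sensitivity set is much thinner than the l2 ball.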
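The sample-wise plus coordinate-wise subsampling idea behind twice sampling can be sketched as below. This is only a structural illustration under assumed names (`twice_sample`, independent Poisson sampling of rows and coordinates); the paper's actual procedure, its RDP analysis, and the role of the l∞-norm restriction are more involved.

```python
import numpy as np

def twice_sample(grads, q_sample, q_coord, rng):
    """Illustrative 'twice sampling' over an (n, d) matrix of per-sample
    gradients: Poisson-subsample the rows (sample-wise sampling), then
    Poisson-subsample the coordinates (coordinate-wise sampling), and sum
    the surviving entries into a d-dimensional output.
    """
    n, d = grads.shape
    row_mask = rng.random(n) < q_sample   # each sample kept w.p. q_sample
    col_mask = rng.random(d) < q_coord    # each coordinate kept w.p. q_coord
    out = np.zeros(d)
    out[col_mask] = grads[row_mask][:, col_mask].sum(axis=0)
    return out

rng = np.random.default_rng(0)
grads = np.ones((8, 4))
released = twice_sample(grads, 1.0, 1.0, rng)  # q=1 keeps everything
```

Intuitively, coordinate-wise sampling is what lets the added Gaussian noise track a sensitivity set that is bounded in l∞ norm as well as l2, since each coordinate only contributes with probability `q_coord`.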
format | Article |
id | mit-1721.1/153139 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2023 |
publisher | ACM|Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security |
department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
date | 2023-11-15 |
type | Conference Paper (http://purl.org/eprint/type/ConferencePaper) |
isbn | 979-8-4007-0050-7 |
citation | Xiao, Hanshen, Wan, Jun and Devadas, Srinivas. 2023. "Geometry of Sensitivity: Twice Sampling and Hybrid Clipping in Differential Privacy with Optimal Gaussian Noise and Application to Deep Learning." |
doi | https://doi.org/10.1145/3576915.3623142 |
rights | Creative Commons Attribution (https://creativecommons.org/licenses/by/4.0/) |
publisher | Association for Computing Machinery |
title | Geometry of Sensitivity: Twice Sampling and Hybrid Clipping in Differential Privacy with Optimal Gaussian Noise and Application to Deep Learning |