Summary: Randomized smoothing is a recent technique that achieves state-of-the-art performance in training certifiably robust deep neural networks. While the
smoothing family of distributions is often connected to the choice of the norm used for certification, the parameters of these distributions are
always set as global hyperparameters independent of the input data on which a network is certified. In this work, we revisit Gaussian randomized
smoothing and show that the variance of the Gaussian distribution can be optimized at each input so
as to maximize the certification radius for the constructed smooth classifier. Since the data-dependent classifier does not directly enjoy sound
certification with existing approaches, we propose
a memory-enhanced data-dependent smooth classifier that is certifiable by construction. This new
approach is generic, parameter-free, and easy to
implement. In fact, we show that our data-dependent framework can be seamlessly incorporated into three randomized smoothing approaches, leading to consistently improved certified accuracy. When
this framework is used in the training routine of
these approaches, followed by data-dependent certification, we achieve improvements of 9% and 6% over the certified accuracy of the strongest baseline at a radius of 0.5 on CIFAR10 and ImageNet, respectively.
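To make the per-input variance optimization concrete, the following is a minimal sketch, not the authors' exact procedure. It assumes a PyTorch base classifier `f` mapping a batch of inputs to logits, uses the standard Gaussian-smoothing radius sigma * Phi^{-1}(p_A) (Cohen et al.), and replaces the optimization with an illustrative grid search over candidate sigmas; the names `estimated_radius` and `sigma_star` are hypothetical.

```python
import torch
from torch.distributions import Normal

def estimated_radius(f, x, sigma, n_samples=100):
    """Monte Carlo estimate of the radius sigma * Phi^{-1}(p_A) under
    Gaussian smoothing with standard deviation sigma. Illustrative only:
    a sound certificate requires confidence bounds on p_A."""
    noise = torch.randn(n_samples, *x.shape) * sigma
    preds = f(x.unsqueeze(0) + noise).argmax(dim=1)   # top class per sample
    p_a = preds.bincount().max().float() / n_samples  # top-class frequency
    p_a = p_a.clamp(max=1 - 1e-6)                     # keep Phi^{-1} finite
    return (sigma * Normal(0.0, 1.0).icdf(p_a)).item()

def sigma_star(f, x, sigmas=(0.12, 0.25, 0.5, 1.0)):
    """Per-input sigma chosen to maximize the estimated radius
    (a grid-search stand-in for the optimization described above)."""
    return max(sigmas, key=lambda s: estimated_radius(f, x, s))
```

The point of the sketch is the selection rule: instead of certifying every input with one global sigma, each input receives the sigma that maximizes its own (estimated) certification radius.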