Robustly Learning a Gaussian: Getting Optimal Error, Efficiently
| Main Authors: | |
|---|---|
| Other Authors: | |
| Format: | Article |
| Published: | Society for Industrial and Applied Mathematics, 2018 |
| Online Access: | http://hdl.handle.net/1721.1/116214 https://orcid.org/0000-0003-0048-2559 https://orcid.org/0000-0002-9937-0049 https://orcid.org/0000-0001-7047-0495 |
| Summary: | We study the fundamental problem of learning the parameters of a high-dimensional Gaussian in the presence of noise, where an ϵ-fraction of our samples were chosen by an adversary. We give robust estimators that achieve estimation error O(ϵ) in total variation distance, which is optimal up to a universal constant independent of the dimension. When only the mean is unknown, our robustness guarantee is optimal up to a factor of √2, and the running time is polynomial in d and 1/ϵ. When both the mean and covariance are unknown, the running time is polynomial in d and quasipolynomial in 1/ϵ. Moreover, all of our algorithms require only a polynomial number of samples. Our work shows that the same error guarantees established over fifty years ago in the one-dimensional setting can also be achieved by efficient algorithms in high-dimensional settings. |
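The contamination model in the summary can be illustrated with a small simulation: this is not the paper's algorithm, only a sketch (with assumed names and an assumed adversary that places a point mass far from the true mean) showing why the empirical mean fails under ϵ-corruption, while even a naive robust baseline like the coordinate-wise median stays near the true mean. Note the coordinate-wise median only guarantees error on the order of ϵ√d; the paper's contribution is achieving the dimension-independent O(ϵ) rate efficiently.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 50, 10_000, 0.1  # dimension, sample size, corruption fraction

# Inliers drawn from N(mu, I) with a known ground-truth mean.
mu = np.ones(d)
inliers = rng.standard_normal((int(n * (1 - eps)), d)) + mu
# Adversarial eps-fraction: a far-away point mass chosen to drag the mean.
outliers = np.full((int(n * eps), d), 100.0)
samples = np.vstack([inliers, outliers])

# The sample mean shifts by roughly eps * 100 per coordinate ...
mean_err = np.linalg.norm(samples.mean(axis=0) - mu)
# ... while the coordinate-wise median moves only slightly.
median_err = np.linalg.norm(np.median(samples, axis=0) - mu)

print(f"sample-mean error:  {mean_err:.2f}")
print(f"coord-median error: {median_err:.2f}")
```

With 10% of points placed at 100 in every coordinate, the sample mean lands near 10.9 per coordinate while the median barely moves, so the gap between the two errors is dramatic even in this crude setup.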