Cluster Analysis in High Dimensions: Robustness, Privacy, and Beyond

Bibliographic Details
Main Author: Narayanan, Shyam
Other Authors: Indyk, Piotr
Format: Thesis
Published: Massachusetts Institute of Technology, 2024
Online Access: https://hdl.handle.net/1721.1/156617
Description
Summary: Cluster analysis focuses on understanding the cluster structure of data, and is perhaps one of the most important subfields in high-dimensional data analysis. Traditionally, cluster analysis focuses on partitioning data into closely related groups, such as in k-means clustering and learning mixture models. However, a sometimes overlooked part of cluster analysis is analyzing data from a single cluster: this encompasses problems such as mean estimation and covariance estimation, which correspond to learning the location and shape of a cluster, respectively. In this thesis, we study various classic problems in high-dimensional cluster analysis, relating to both identifying several clusters and learning a single cluster. We provide improved algorithms and lower bounds for problems including k-means and k-median clustering, Gaussian mean and covariance estimation, high-dimensional mean testing, and learning mixtures of Gaussians. Importantly, in this thesis we also focus on the socially motivated constraints of robustness, privacy, and explainability, and how they affect the complexity of these problems. In our quest to understand cluster analysis under such socially motivated constraints, we discover the first black-box transformation from robustness to privacy, as well as the first known statistical separation between some natural models of robust statistics.
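To make the notion of "partitioning data into closely related groups" concrete, here is a minimal sketch of Lloyd's algorithm, the classic heuristic for k-means clustering mentioned in the summary. The toy data and function name are illustrative assumptions, not code or results from the thesis itself.

```python
import random

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    # Initialize centers by sampling k distinct input points.
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])),
            )
            clusters[nearest].append(p)
        # Update step: move each nonempty cluster's center to its coordinate-wise mean.
        for j, cluster in enumerate(clusters):
            if cluster:
                dim = len(cluster[0])
                centers[j] = tuple(
                    sum(p[i] for p in cluster) / len(cluster) for i in range(dim)
                )
    return centers

# Two well-separated 1-D groups; the centers settle near the two group means.
data = [(0.0,), (0.5,), (1.0,), (9.0,), (9.5,), (10.0,)]
centers = sorted(kmeans(data, k=2))
```

Lloyd's algorithm only finds a local optimum, which is one reason the thesis-level questions about approximation guarantees for k-means and k-median are nontrivial.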