Online k-means clustering


Bibliographic Details
Main Authors: Cohen-Addad, V, Guedj, B, Rom, G
Format: Conference item
Language: English
Published: PMLR 2021
collection OXFORD
description We study the problem of learning a clustering of a set of points arriving online. The specific formulation we use is the k-means objective: at each time step the algorithm must maintain a set of k candidate centers, and the loss it incurs on the new point is the squared distance between that point and the closest center. The goal is to minimize regret with respect to the best solution to the k-means objective in hindsight. We show that, provided the data lie in a bounded region, learning is possible: an implementation of the Multiplicative Weights Update Algorithm (MWUA) over a discretized grid achieves a regret bound of Õ(√T) in expectation. We also present an online-to-offline reduction showing that an efficient no-regret online algorithm (even one allowed to choose a different set of candidate centers at each round) implies an efficient offline algorithm for the k-means problem, which is known to be NP-hard. In light of this hardness, we consider the slightly weaker requirement of comparing regret with respect to (1+ϵ)·OPT, and present a no-regret algorithm with runtime O(T·poly(log(T), k, d, 1/ϵ)^O(kd)). Our algorithm maintains a bounded-size set of points, a coreset, which helps identify the relevant regions of the space for running an adaptive, more efficient variant of the MWUA. We show that simpler online algorithms, such as Follow The Leader (FTL), fail to achieve sublinear regret in the worst case. We also report preliminary experiments with synthetic and real-world data. Our theoretical results answer an open question of Dasgupta (2008).
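The grid-based MWUA idea from the abstract can be sketched in a few lines. The toy below is an illustration only, not the paper's algorithm (which uses a finer analysis and an adaptive, coreset-guided variant): experts are all size-k subsets of a coarse 2-D grid, each new point charges every expert its squared distance to that expert's nearest center, and weights are updated multiplicatively. The names `grid_res`, `eta`, and `bound` are hypothetical parameters chosen for the sketch.

```python
import itertools
import math
import random

def mwua_online_kmeans(points, k=2, grid_res=4, eta=0.5, bound=1.0):
    """Toy MWUA for online k-means over [0, bound]^2 (illustrative sketch,
    not the paper's algorithm). Experts are all k-subsets of a discretized
    grid; losses are squared distances to the nearest center, scaled so the
    update stays well behaved."""
    # Candidate centers: a grid_res x grid_res lattice over the bounded region.
    axis = [i * bound / (grid_res - 1) for i in range(grid_res)]
    grid = list(itertools.product(axis, axis))
    experts = list(itertools.combinations(grid, k))  # each expert = k centers
    weights = [1.0] * len(experts)
    rng = random.Random(0)
    total_loss = 0.0
    for p in points:
        # Play the centers of an expert sampled proportionally to its weight,
        # and pay the squared distance from p to the closest played center.
        centers = rng.choices(experts, weights=weights, k=1)[0]
        total_loss += min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
                          for c in centers)
        # Multiplicative update: down-weight experts that fit p poorly.
        for i, e in enumerate(experts):
            loss = min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in e)
            weights[i] *= math.exp(-eta * loss / (2 * bound ** 2))
    best = max(range(len(experts)), key=lambda i: weights[i])
    return total_loss, experts[best]
```

Note the cost that motivates the paper's runtime bound: the number of experts is (grid_res² choose k), i.e. exponential in k and the dimension, which is why a naive grid MWUA is inefficient and an adaptive variant is needed.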
id oxford-uuid:2dc0ce88-fc92-4ef6-9ee1-8bc8c3184620
institution University of Oxford