# K-means Clustering

k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition $n$ observations into $k$ clusters in which each observation belongs to the cluster with the nearest mean (cluster center or centroid), which serves as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. k-means clustering minimizes within-cluster variances (squared Euclidean distances), but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances. For instance, better Euclidean solutions can be found using k-medians and k-medoids.

The problem is computationally difficult (NP-hard); however, efficient heuristic algorithms converge quickly to a local optimum. These heuristics are usually similar to the expectation-maximization algorithm for mixtures of Gaussian distributions, as both $k$-means and Gaussian mixture modeling employ an iterative refinement approach. Both use cluster centers to model the data; however, $k$-means clustering tends to find clusters of comparable spatial extent, while the Gaussian mixture model allows clusters to have different shapes.

The unsupervised k-means algorithm has a loose relationship to the $k$-nearest neighbor classifier, a popular supervised machine learning technique for classification that is often confused with $k$-means due to the name. Applying the 1-nearest neighbor classifier to the cluster centers obtained by $k$-means classifies new data into the existing clusters. This is known as the nearest centroid classifier or Rocchio algorithm.

## Description

Given a set of observations $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n$, where each observation is a $d$-dimensional real vector, $k$-means clustering aims to partition the $n$ observations into $k \leq n$ sets $S=\{S_1,S_2,\ldots,S_k\}$ so as to minimize the within-cluster sum of squares (WCSS) (i.e. variance). Formally, the objective is to find:

[$]\underset{\mathbf{S}} {\operatorname{arg\,min}} \sum_{i=1}^{k} \sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2 = \underset{\mathbf{S}} {\operatorname{arg\,min}} \sum_{i=1}^k |S_i| \operatorname{Var} S_i [$]

where $\boldsymbol\mu_i$ is the mean of points in $S_i$. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster:
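As an illustration, the WCSS objective can be computed directly in NumPy (a minimal sketch; the function name `wcss` is ours, not from any library):

```python
import numpy as np

def wcss(X, labels, k):
    """Within-cluster sum of squares for a given partition of the rows of X."""
    total = 0.0
    for i in range(k):
        cluster = X[labels == i]
        if len(cluster) == 0:
            continue
        mu = cluster.mean(axis=0)             # cluster centroid
        total += ((cluster - mu) ** 2).sum()  # squared distances to the centroid
    return total

# Tiny example: two well-separated 1-D clusters
X = np.array([[0.0], [1.0], [10.0], [11.0]])
labels = np.array([0, 0, 1, 1])
print(wcss(X, labels, k=2))  # 0.5 + 0.5 = 1.0
```

The objective sums this quantity over all $k$ clusters; the `argmin` in the formula is over all possible partitions, which is what makes exact minimization hard.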

[$]\underset{\mathbf{S}} {\operatorname{arg\,min}} \sum_{i=1}^{k} \, \frac{1}{ |S_i|} \, \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\| \mathbf{x} - \mathbf{y} \right\|^2[$]

The equivalence can be deduced from the identity

[$]|S_i|\sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2 = \frac{1}{2}\sum_{\mathbf{x},\mathbf{y} \in S_i}\left\|\mathbf x - \mathbf y\right\|^2.[$]

Since the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in different clusters (between-cluster sum of squares, BCSS). This deterministic relationship is also related to the law of total variance in probability theory.
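This decomposition can be checked numerically (a sketch with illustrative variable names): for any partition, the total sum of squares about the grand mean equals WCSS plus BCSS.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
labels = np.arange(20) % 3            # an arbitrary 3-way partition
grand_mean = X.mean(axis=0)

# Total sum of squares: constant, independent of the partition
tss = ((X - grand_mean) ** 2).sum()

wcss = 0.0
bcss = 0.0
for i in range(3):
    cluster = X[labels == i]
    mu = cluster.mean(axis=0)
    wcss += ((cluster - mu) ** 2).sum()                     # within-cluster
    bcss += len(cluster) * ((mu - grand_mean) ** 2).sum()   # between-cluster

print(np.isclose(tss, wcss + bcss))  # True
```

Because `tss` is fixed by the data, minimizing WCSS and maximizing BCSS are the same optimization problem.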

## History

The term "$k$-means" was first used by James MacQueen in 1967, though the idea goes back to Hugo Steinhaus in 1956. The standard algorithm was first proposed by Stuart Lloyd of Bell Labs in 1957 as a technique for pulse-code modulation, although it was not published as a journal article until 1982. In 1965, Edward W. Forgy published essentially the same method, which is why it is sometimes referred to as the Lloyd–Forgy algorithm.

## Algorithms

### Standard algorithm (naive k-means)

The most common algorithm uses an iterative refinement technique. Due to its ubiquity, it is often called "the $k$-means algorithm"; it is also referred to as Lloyd's algorithm, particularly in the computer science community. It is sometimes also referred to as "naïve k-means", because there exist much faster alternatives.

Given an initial set of $k$ means $m_1^{(1)},\ldots,m_k^{(1)}$ (see below), the algorithm proceeds by alternating between two steps:

1. Assignment step: Assign each observation to the cluster with the nearest mean, i.e., the one with the least squared Euclidean distance. (Mathematically, this means partitioning the observations according to the Voronoi diagram generated by the means.)
[$]S_i^{(t)} = \left \{ x_p : \left \| x_p - m^{(t)}_i \right \|^2 \le \left \| x_p - m^{(t)}_j \right \|^2 \ \forall j, 1 \le j \le k \right\},[$]
where each $x_p$ is assigned to exactly one $S^{(t)}_i$, even if it could be assigned to two or more of them.
2. Update step: Recalculate means (centroids) for observations assigned to each cluster.

[$]m^{(t+1)}_i = \frac{1}{\left|S^{(t)}_i\right|} \sum_{x_j \in S^{(t)}_i} x_j [$]

The algorithm has converged when the assignments no longer change. The algorithm is not guaranteed to find the optimum.
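The two alternating steps can be sketched in NumPy as follows (a minimal illustration, not a production implementation; `lloyd_kmeans` is an illustrative name, and empty clusters are handled naively by keeping the old mean):

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialization (Forgy): pick k distinct observations as initial means
    means = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest mean by squared Euclidean distance
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: recompute each mean from its assigned points
        new_means = np.array([
            X[labels == i].mean(axis=0) if np.any(labels == i) else means[i]
            for i in range(k)
        ])
        # Stable means imply stable assignments -> converged
        if np.allclose(new_means, means):
            break
        means = new_means
    return means, labels

# Two tight pairs of points; the algorithm separates them into two clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
means, labels = lloyd_kmeans(X, k=2)
```

Since each iteration only reassigns points and recomputes means, the objective never increases, which is why the procedure terminates, but only at a local optimum.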

The algorithm is often presented as assigning objects to the nearest cluster by distance. Using a distance function other than (squared) Euclidean distance may prevent the algorithm from converging. Various modifications of $k$-means such as spherical $k$-means and $k$-medoids have been proposed to allow the use of other distance measures.

#### Initialization methods

Commonly used initialization methods are Forgy and Random Partition. The Forgy method randomly chooses $k$ observations from the dataset and uses these as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds to the update step, thus computing the initial mean to be the centroid of the cluster's randomly assigned points. The Forgy method tends to spread the initial means out, while Random Partition places all of them close to the center of the data set. According to Hamerly et al., the Random Partition method is generally preferable for algorithms such as the $k$-harmonic means and fuzzy $k$-means. For expectation maximization and standard $k$-means algorithms, the Forgy method of initialization is preferable. A comprehensive study by Celebi et al., however, found that popular initialization methods such as Forgy, Random Partition, and Maximin often perform poorly, whereas Bradley and Fayyad's approach performs "consistently" in "the best group" and k-means++ performs "generally well".
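Both initialization schemes can be sketched as follows (illustrative helper names; `X` is assumed to be an (n, d) NumPy array):

```python
import numpy as np

def forgy_init(X, k, rng):
    """Forgy: pick k distinct observations as the initial means."""
    idx = rng.choice(len(X), size=k, replace=False)
    return X[idx]

def random_partition_init(X, k, rng):
    """Random Partition: randomly label the points, then take the
    centroid of each randomly assigned cluster as its initial mean."""
    labels = rng.integers(0, k, size=len(X))
    return np.array([X[labels == i].mean(axis=0) for i in range(k)])

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
print(forgy_init(X, 3, rng).shape)             # (3, 2)
print(random_partition_init(X, 3, rng).shape)  # (3, 2)
```

Note the behavioral difference described above: Forgy means are actual data points scattered across the set, while Random Partition means are averages of near-uniform random subsets and therefore land near the overall data centroid.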

## Discussion

Three key features of $k$-means that make it efficient are often regarded as its biggest drawbacks:

• Euclidean distance is used as a metric and variance is used as a measure of cluster scatter.
• The number of clusters $k$ is an input parameter: an inappropriate choice of $k$ may yield poor results. That is why, when performing $k$-means, it is important to run diagnostic checks for determining the number of clusters in the data set.
• Convergence to a local minimum may produce counterintuitive ("wrong") results (see example in Fig.).

Figure: A typical example of $k$-means converging to a local minimum. Here the result of $k$-means clustering (the right figure) contradicts the obvious cluster structure of the data set. The small circles are the data points, the four-ray stars are the centroids (means); the initial configuration is on the left figure. The algorithm converges after five iterations, presented from left to right. The illustration was prepared with the Mirkes Java applet.

A key limitation of $k$-means is its cluster model. The concept is based on spherical clusters that are separable so that the mean converges towards the cluster center. The clusters are expected to be of similar size, so that the assignment to the nearest cluster center is the correct assignment. When, for example, applying $k$-means with a value of $k=3$ to the well-known Iris flower data set, the result often fails to separate the three Iris species contained in the data set. With $k=2$, the two visible clusters (one containing two species) will be discovered, whereas with $k=3$ one of the two clusters will be split into two even parts. In fact, $k=2$ is more appropriate for this data set, even though it contains 3 classes. As with any other clustering algorithm, the $k$-means result makes assumptions that the data satisfy certain criteria. It works well on some data sets and fails on others.

Figure: k-means clustering result for the Iris flower data set and actual species, visualized using ELKI. Cluster means are marked using larger, semi-transparent symbols.

The result of $k$-means can be seen as the Voronoi cells of the cluster means. Since data is split halfway between cluster means, this can lead to suboptimal splits, as can be seen in the "mouse" example. The Gaussian models used by the expectation-maximization algorithm (arguably a generalization of $k$-means) are more flexible, having both variances and covariances. The EM result is thus able to accommodate clusters of variable size much better than $k$-means, as well as correlated clusters (not in this example). On the other hand, EM requires the optimization of a larger number of free parameters and poses some methodological issues due to vanishing clusters or badly conditioned covariance matrices. $k$-means is closely related to nonparametric Bayesian modeling.

Figure: k-means clustering vs. EM clustering on an artificial dataset ("mouse"). The tendency of $k$-means to produce equal-sized clusters leads to bad results here, while EM benefits from the Gaussian distributions with different radii present in the data set.