k-means clustering

From Wikipedia, the free encyclopedia

k-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells.

The problem is computationally difficult (NP-hard); however, efficient heuristic algorithms converge quickly to a local optimum. These are usually similar to the expectation-maximization algorithm for mixtures of Gaussian distributions via an iterative refinement approach employed by both k-means and Gaussian mixture modeling. They both use cluster centers to model the data; however, k-means clustering tends to find clusters of comparable spatial extent, while the expectation-maximization mechanism allows clusters to have different shapes.

The algorithm has a loose relationship to the k-nearest neighbor classifier, a popular machine learning technique for classification that is often confused with k-means due to the name. Applying the 1-nearest neighbor classifier to the cluster centers obtained by k-means classifies new data into the existing clusters. This is known as the nearest centroid classifier or Rocchio algorithm.

Description

Given a set of observations (x1, x2, …, xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k (≤ n) sets S = {S1, S2, …, Sk} so as to minimize the within-cluster sum of squares (WCSS) (i.e. variance). Formally, the objective is to find:

$$\underset{\mathbf{S}}{\arg\min} \sum_{i=1}^{k} \sum_{\mathbf{x} \in S_i} \left\| \mathbf{x} - \boldsymbol{\mu}_i \right\|^2 = \underset{\mathbf{S}}{\arg\min} \sum_{i=1}^{k} |S_i| \operatorname{Var} S_i$$

where μi is the mean of points in Si. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster:

$$\underset{\mathbf{S}}{\arg\min} \sum_{i=1}^{k} \frac{1}{2|S_i|} \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\| \mathbf{x} - \mathbf{y} \right\|^2$$

The equivalence can be deduced from the identity $\sum_{\mathbf{x} \in S_i} \left\| \mathbf{x} - \boldsymbol{\mu}_i \right\|^2 = \frac{1}{2|S_i|} \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\| \mathbf{x} - \mathbf{y} \right\|^2$. Because the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in different clusters (between-cluster sum of squares, BCSS),[1] which follows from the law of total variance.
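The equivalence above can be checked numerically. The following is an illustrative sketch (not part of the article's presentation) that computes the WCSS of a fixed one-dimensional partition both via cluster means and via pairwise squared deviations:

```python
def wcss(clusters):
    """Within-cluster sum of squares, computed via the cluster means."""
    total = 0.0
    for s in clusters:
        mu = sum(s) / len(s)  # mean of the cluster
        total += sum((x - mu) ** 2 for x in s)
    return total

def wcss_pairwise(clusters):
    """The same objective via pairwise squared deviations; no means needed."""
    return sum(
        sum((x - y) ** 2 for x in s for y in s) / (2 * len(s))
        for s in clusters
    )

clusters = [[1.0, 2.0, 3.0], [10.0, 12.0]]
# Both forms give the same value for any partition.
```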

History

The term "k-means" was first used by James MacQueen in 1967,[2] though the idea goes back to Hugo Steinhaus in 1957.[3] The standard algorithm was first proposed by Stuart Lloyd in 1957 as a technique for pulse-code modulation, though it wasn't published outside of Bell Labs until 1982.[4] In 1965, E. W. Forgy published essentially the same method, which is why it is sometimes referred to as Lloyd-Forgy.[5]

Algorithms

Standard algorithm

Convergence of k-means

The most common algorithm uses an iterative refinement technique. Due to its ubiquity it is often called the k-means algorithm; it is also referred to as Lloyd's algorithm, particularly in the computer science community.

Given an initial set of k means m1(1),…,mk(1) (see below), the algorithm proceeds by alternating between two steps:[6]

Assignment step: Assign each observation to the cluster whose mean has the least squared Euclidean distance; this is intuitively the "nearest" mean.[7] (Mathematically, this means partitioning the observations according to the Voronoi diagram generated by the means.)

$$S_i^{(t)} = \left\{ x_p : \left\| x_p - m_i^{(t)} \right\|^2 \le \left\| x_p - m_j^{(t)} \right\|^2 \ \forall j, 1 \le j \le k \right\}$$

where each x_p is assigned to exactly one S^{(t)}, even if it could be assigned to two or more of them.

Update step: Calculate the new means to be the centroids of the observations in the new clusters:

$$m_i^{(t+1)} = \frac{1}{\left| S_i^{(t)} \right|} \sum_{x_j \in S_i^{(t)}} x_j$$

The algorithm has converged when the assignments no longer change. The algorithm does not guarantee to find the optimum.[8]
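The two alternating steps can be sketched in a few lines of Python. This is a minimal illustration of Lloyd's algorithm as described above (with Forgy initialization), not a production implementation:

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm sketch; points are tuples of equal dimension."""
    rng = random.Random(seed)
    means = rng.sample(points, k)  # Forgy init: k random observations
    for _ in range(iters):
        # Assignment step: each point goes to the nearest current mean.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, means[i]))
            clusters[j].append(p)
        # Update step: each mean becomes the centroid of its cluster.
        new_means = []
        for i, cl in enumerate(clusters):
            if cl:
                new_means.append(tuple(sum(coord) / len(cl) for coord in zip(*cl)))
            else:
                new_means.append(means[i])  # keep an empty cluster's mean unchanged
        if new_means == means:  # assignments no longer change: converged
            break
        means = new_means
    return means, clusters
```

Note the convergence test: the loop stops when the means (and hence the assignments) stop changing, which is guaranteed to happen but only at a local optimum.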

The algorithm is often presented as assigning objects to the nearest cluster by distance. Using a distance function other than (squared) Euclidean distance may stop the algorithm from converging. Various modifications of k-means such as spherical k-means and k-medoids have been proposed to allow using other distance measures.

Initialization methods

Commonly used initialization methods are Forgy and Random Partition.[9] The Forgy method randomly chooses k observations from the dataset and uses these as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds to the update step, thus computing the initial mean to be the centroid of the cluster's randomly assigned points. The Forgy method tends to spread the initial means out, while Random Partition places all of them close to the center of the data set. According to Hamerly et al.,[9] the Random Partition method is generally preferable for algorithms such as the k-harmonic means and fuzzy k-means. For expectation maximization and standard k-means algorithms, the Forgy method of initialization is preferable. A comprehensive study by Celebi et al.,[10] however, found that popular initialization methods such as Forgy, Random Partition, and Maximin often perform poorly, whereas Bradley and Fayyad's approach[11] performs "consistently" in "the best group" and k-means++ performs "generally well".
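The two initialization methods can be sketched as follows (an illustrative sketch; function names are not standard):

```python
import random

def forgy_init(points, k, rng):
    # Forgy: pick k distinct observations as the initial means,
    # which tends to spread them out across the data.
    return [list(p) for p in rng.sample(points, k)]

def random_partition_init(points, k, rng):
    # Random Partition: randomly label every point, then take centroids.
    # These centroids tend to land near the centre of the whole data set.
    labels = [rng.randrange(k) for _ in points]
    d = len(points[0])
    means = []
    for j in range(k):
        members = [p for p, lab in zip(points, labels) if lab == j]
        if not members:  # empty cluster: fall back to a random observation
            members = [rng.choice(points)]
        means.append([sum(p[c] for p in members) / len(members) for c in range(d)])
    return means
```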

The algorithm does not guarantee convergence to the global optimum. The result may depend on the initial clusters. As the algorithm is usually fast, it is common to run it multiple times with different starting conditions. However, worst-case performance can be slow: in particular, certain point sets, even in 2 dimensions, require exponential time to converge, that is 2^Ω(n) iterations.[12] These point sets do not seem to arise in practice: this is corroborated by the fact that the smoothed running time of k-means is polynomial.[13]

The "assignment" step is referred to as the "expectation step", while the "update step" is a maximization step, making this algorithm a variant of the generalized expectation-maximization algorithm.

Complexity

Finding the optimal solution to the k-means clustering problem for observations in d dimensions is:

  • NP-hard in general Euclidean space ℝ^d even for 2 clusters,[14][15][16][17]
  • NP-hard for a general number of clusters k even in the plane,[18]
  • if k and d (the dimension) are fixed, the problem can be exactly solved in time O(n^(dk+1)), where n is the number of entities to be clustered.[19]

Thus, a variety of heuristic algorithms such as Lloyd's algorithm given above are generally used.

The running time of Lloyd's algorithm (and most variants) is O(nkdi),[8][20] where:

  • n is the number of d-dimensional vectors
  • k the number of clusters
  • i the number of iterations needed until convergence.

On data that does have a clustering structure, the number of iterations until convergence is often small, and results only improve slightly after the first dozen iterations. Lloyd's algorithm is therefore often considered to be of "linear" complexity in practice, although it is in the worst case superpolynomial when performed until convergence.[21]

  • In the worst case, Lloyd's algorithm needs i = 2^Ω(√n) iterations, so that the worst-case complexity of Lloyd's algorithm is superpolynomial.[21]
  • Lloyd's k-means algorithm has polynomial smoothed running time. It is shown[13] that for an arbitrary set of n points in [0,1]^d, if each point is independently perturbed by a normal distribution with mean 0 and variance σ², then the expected running time of the k-means algorithm is bounded by O(n^34 k^34 d^8 log^4(n) / σ^6), which is a polynomial in n, k, d and 1/σ.
  • Better bounds are proven for simple cases. For example, it is shown[22] that the running time of the k-means algorithm is bounded by O(dn^4 M^2) for n points in an integer lattice {1, …, M}^d.

Lloyd's algorithm is the standard approach for this problem. However, it spends a lot of processing time computing the distances between each of the k cluster centers and the n data points. Since points usually stay in the same clusters after a few iterations, much of this work is unnecessary, making the naive implementation very inefficient. Some implementations use caching and the triangle inequality in order to create bounds and accelerate Lloyd's algorithm.[8][23][24][25][26]

Variations

  • Jenks natural breaks optimization: k-means applied to univariate data
  • k-medians clustering uses the median in each dimension instead of the mean, and this way minimizes the L¹ norm (Taxicab geometry).
  • k-medoids (also: Partitioning Around Medoids, PAM) uses the medoid instead of the mean, and this way minimizes the sum of distances for arbitrary distance functions.
  • Fuzzy C-Means Clustering is a soft version of k-means, where each data point has a fuzzy degree of belonging to each cluster.
  • Gaussian mixture models trained with expectation-maximization algorithm (EM algorithm) maintains probabilistic assignments to clusters, instead of deterministic assignments, and multivariate Gaussian distributions instead of means.
  • k-means++ chooses initial centers in a way that gives a provable upper bound on the WCSS objective.
  • The filtering algorithm uses kd-trees to speed up each k-means step.[27]
  • Some methods attempt to speed up each k-means step using the triangle inequality.[23][24][25][28][26]
  • Escape local optima by swapping points between clusters.[8]
  • The Spherical k-means clustering algorithm is suitable for textual data.[29]
  • Hierarchical variants such as Bisecting k-means,[30] X-means clustering[31] and G-means clustering[32] repeatedly split clusters to build a hierarchy, and can also try to automatically determine the optimal number of clusters in a dataset.
  • Internal cluster evaluation measures such as cluster silhouette can be helpful at determining the number of clusters.
  • Minkowski weighted k-means automatically calculates cluster-specific feature weights, supporting the intuitive idea that a feature may have different degrees of relevance in different clusters.[33] These weights can also be used to re-scale a given data set, increasing the likelihood of a cluster validity index to be optimized at the expected number of clusters.[34]
  • Mini-batch k-means: k-means variation using "mini batch" samples for data sets that do not fit into memory.[35]
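The k-means++ seeding mentioned above chooses each new center with probability proportional to its squared distance from the nearest center already chosen (so-called D² sampling). A self-contained sketch of that seeding step (illustrative only):

```python
import random

def kmeans_pp_init(points, k, seed=0):
    """k-means++ seeding sketch: D^2 sampling of initial centers."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]  # first center: uniform at random
    while len(centers) < k:
        # Squared distance of every point to its nearest chosen center.
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in points]
        # Sample the next center with probability proportional to d2.
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
        else:
            centers.append(points[-1])
    return centers
```

Points far from all existing centers are much more likely to be picked, which yields the provable bound on the WCSS objective cited above.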

Hartigan-Wong method

Hartigan and Wong's method[8] provides a more sophisticated though more computationally expensive way to perform k-means. It is still a heuristic method.

Let φ(S_j) be the individual cost of S_j, defined by φ(S_j) = Σ_{x ∈ S_j} ||x − μ_j||², with μ_j the center of the cluster.

Assignment step: Hartigan and Wong's method starts by partitioning the points into random clusters {S_j}_{j ∈ {1, …, k}}.

Update step: Next it determines the x, n and m for which the following function reaches a minimum:

$$\Delta(x, n, m) = \varphi(S_n \setminus \{x\}) + \varphi(S_m \cup \{x\}) - \varphi(S_n) - \varphi(S_m)$$

For the x, n, m that reach this minimum, x moves from the cluster S_n to the cluster S_m.

Termination: The algorithm terminates once Δ(x, n, m) is larger than zero for all x, n, m.

The algorithm can be sped up by immediately moving x from the cluster S_n to the cluster S_m as soon as an x, n, m have been found for which Δ(x, n, m) < 0. This speed-up can make the cost of the final result higher.

The function Δ can be relatively efficiently evaluated by making use of the equality[36]

$$\Delta(x, n, m) = \frac{|S_m|}{|S_m| + 1} \left\| \mu_m - x \right\|^2 - \frac{|S_n|}{|S_n| - 1} \left\| \mu_n - x \right\|^2$$
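That equality lets Δ be computed from just the cluster means and sizes, without re-summing either cluster. A sketch (illustrative; the reconstruction of Δ above is assumed):

```python
def delta(x, mean_n, size_n, mean_m, size_m):
    """Change in total cost when x moves from cluster n (its current
    cluster, with mean mean_n and size size_n) to cluster m."""
    d2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    gain = size_m / (size_m + 1) * d2(mean_m, x)  # cost added to cluster m
    loss = size_n / (size_n - 1) * d2(mean_n, x)  # cost removed from cluster n
    return gain - loss  # negative means the move improves the objective
```

For example, moving the point (0, 0) out of a two-point cluster with mean (0, 1) into a singleton cluster with mean (10, 0) adds 0.5·100 = 50 to the target cluster and removes 2·1 = 2 from the source, so Δ = 48 and the move is rejected.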

Discussion

A typical example of k-means converging to a local minimum. In this example, the result of k-means clustering (the rightmost figure) contradicts the obvious cluster structure of the data set. The small circles are the data points, the four-ray stars are the centroids (means). The initial configuration is in the leftmost figure. The algorithm converges after five iterations, presented in the figures from left to right. The illustration was prepared with the Mirkes Java applet.[37]
k-means clustering result for the Iris flower data set and actual species visualized using ELKI. Cluster means are marked using larger, semi-transparent symbols.
k-means clustering vs. EM clustering on an artificial dataset ("mouse"). The tendency of k-means to produce equal-sized clusters leads to bad results here, while EM benefits from the Gaussian distributions with different radius present in the data set.

Three key features of k-means that make it efficient are often regarded as its biggest drawbacks:

  • Euclidean distance is used as a metric and variance is used as a measure of cluster scatter.
  • The number of clusters k is an input parameter: an inappropriate choice of k may yield poor results. That is why, when performing k-means, it is important to run diagnostic checks for determining the number of clusters in the data set.
  • Convergence to a local minimum may produce counterintuitive ("wrong") results (see example in Fig.).

A key limitation of k-means is its cluster model. The concept is based on spherical clusters that are separable so that the mean converges towards the cluster center. The clusters are expected to be of similar size, so that the assignment to the nearest cluster center is the correct assignment. When for example applying k-means with a value of k = 3 onto the well-known Iris flower data set, the result often fails to separate the three Iris species contained in the data set. With k = 2, the two visible clusters (one containing two species) will be discovered, whereas with k = 3 one of the two clusters will be split into two even parts. In fact, k = 2 is more appropriate for this data set, despite the data set's containing 3 classes. As with any other clustering algorithm, the k-means result makes assumptions that the data satisfy certain criteria. It works well on some data sets, and fails on others.

The result of k-means can be seen as the Voronoi cells of the cluster means. Since data is split halfway between cluster means, this can lead to suboptimal splits as can be seen in the "mouse" example. The Gaussian models used by the Expectation-maximization algorithm (arguably a generalization of k-means) are more flexible by having both variances and covariances. The EM result is thus able to accommodate clusters of variable size much better than k-means as well as correlated clusters (not in this example). K-means is closely related to nonparametric Bayesian modeling.[38]

Applications

k-means clustering is rather easy to apply to even large data sets, particularly when using heuristics such as Lloyd's algorithm. It has been successfully used in market segmentation, computer vision, geostatistics,[39] astronomy and agriculture. It often is used as a preprocessing step for other algorithms, for example to find a starting configuration.

Vector quantization

Two-channel (for illustration purposes: red and green only) color image.
Vector quantization of colors present in the image above into Voronoi cells using k-means.

k-means originates from signal processing, and still finds use in this domain. For example, in computer graphics, color quantization is the task of reducing the color palette of an image to a fixed number of colors k. The k-means algorithm can easily be used for this task and produces competitive results. A use case for this approach is image segmentation. Other uses of vector quantization include non-random sampling, as k-means can easily be used to choose k different but prototypical objects from a large data set for further analysis.
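Color quantization with k-means amounts to clustering the pixel colors and snapping each pixel to its nearest cluster center. An illustrative sketch (the function name and data layout are assumptions, not a standard API):

```python
import math
import random

def quantize_colors(pixels, k, iters=10, seed=0):
    """Reduce an image's palette to k colors: run Lloyd's algorithm on the
    RGB values, then map each pixel to the nearest resulting center."""
    rng = random.Random(seed)
    palette = [list(c) for c in rng.sample(pixels, k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for px in pixels:  # assignment: nearest palette color
            j = min(range(k), key=lambda i: math.dist(px, palette[i]))
            buckets[j].append(px)
        for i, b in enumerate(buckets):  # update: mean color per bucket
            if b:
                palette[i] = [sum(ch) / len(b) for ch in zip(*b)]
    # Snap every pixel to its nearest palette entry.
    return [min(palette, key=lambda c: math.dist(px, c)) for px in pixels]
```

Here `pixels` is a flat list of RGB tuples; for a real image one would flatten the pixel array first and reshape afterwards.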

Cluster analysis

In cluster analysis, the k-means algorithm can be used to partition the input data set into k partitions (clusters).

However, the pure k-means algorithm is not very flexible, and as such is of limited use (except for when vector quantization as above is actually the desired use case). In particular, the parameter k is known to be hard to choose (as discussed above) when not given by external constraints. Another limitation is that it cannot be used with arbitrary distance functions or on non-numerical data. For these use cases, many other algorithms are superior.

Feature learning

k-means clustering has been used as a feature learning (or dictionary learning) step, in either (semi-)supervised learning or unsupervised learning.[40] The basic approach is first to train a k-means clustering representation, using the input training data (which need not be labelled). Then, to project any input datum into the new feature space, an "encoding" function, such as the thresholded matrix-product of the datum with the centroid locations, computes the distance from the datum to each centroid, or simply an indicator function for the nearest centroid,[40][41] or some smooth transformation of the distance.[42] Alternatively, transforming the sample-cluster distance through a Gaussian RBF, obtains the hidden layer of a radial basis function network.[43]
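Two of the encoding functions mentioned above can be sketched directly: the hard indicator of the nearest centroid, and a soft "triangle"-style encoding in the spirit of Coates and Ng's activation (a sketch under that assumption, not their exact formulation):

```python
import math

def one_hot_encode(x, centroids):
    # Indicator-function encoding: 1 for the nearest centroid, 0 elsewhere.
    j = min(range(len(centroids)), key=lambda i: math.dist(x, centroids[i]))
    return [1.0 if i == j else 0.0 for i in range(len(centroids))]

def triangle_encode(x, centroids):
    # Soft encoding: activation max(0, mean distance - distance to centroid).
    # Centroids farther than average contribute nothing, giving a sparse code.
    d = [math.dist(x, c) for c in centroids]
    mu = sum(d) / len(d)
    return [max(0.0, mu - di) for di in d]
```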

This use of k-means has been successfully combined with simple, linear classifiers for semi-supervised learning in NLP (specifically for named entity recognition)[44] and in computer vision. On an object recognition task, it was found to exhibit comparable performance with more sophisticated feature learning approaches such as autoencoders and restricted Boltzmann machines.[42] However, it generally requires more data, for equivalent performance, because each data point only contributes to one "feature".[40]

Relation to other algorithms

Gaussian mixture model

k-means clustering, and its associated expectation-maximization algorithm, is a special case of a Gaussian mixture model, specifically, the limit of taking all covariances as diagonal, equal and small. It is often easy to generalize a k-means problem into a Gaussian mixture model.[45] Another generalization of the k-means algorithm is the K-SVD algorithm, which estimates data points as a sparse linear combination of "codebook vectors". k-means corresponds to the special case of using a single codebook vector, with a weight of 1.[46]

Principal component analysis

The relaxed solution of k-means clustering, specified by the cluster indicators, is given by principal component analysis (PCA).[47][48] The PCA subspace spanned by the principal directions is identical to the cluster centroid subspace. The intuition is that k-means describe spherically shaped (ball-like) clusters. If the data has 2 clusters, the line connecting the two centroids is the best 1-dimensional projection direction, which is also the first PCA direction. Cutting the line at the center of mass separates the clusters (this is the continuous relaxation of the discrete cluster indicator). If the data have three clusters, the 2-dimensional plane spanned by three cluster centroids is the best 2-D projection. This plane is also defined by the first two PCA dimensions. Well-separated clusters are effectively modeled by ball-shaped clusters and thus discovered by k-means. Non-ball-shaped clusters are hard to separate when they are close. For example, two half-moon shaped clusters intertwined in space do not separate well when projected onto PCA subspace. k-means should not be expected to do well on this data.[49] It is straightforward to produce counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.[50]

Mean shift clustering

Basic mean shift clustering algorithms maintain a set of data points the same size as the input data set. Initially, this set is copied from the input set. Then this set is iteratively replaced by the mean of those points in the set that are within a given distance of that point. By contrast, k-means restricts this updated set to k points, usually far fewer than the number of points in the input data set, and replaces each point in this set by the mean of all points in the input set that are closer to that point than any other (e.g. within the Voronoi partition of each updating point). A mean shift algorithm that is similar to k-means, called likelihood mean shift, replaces the set of points undergoing replacement by the mean of all points in the input set that are within a given distance of the changing set.[51] One of the advantages of mean shift over k-means is that the number of clusters is not pre-specified, because mean shift is likely to find only a few clusters if only a small number exist. However, mean shift can be much slower than k-means, and still requires selection of a bandwidth parameter. Mean shift has soft variants.

Independent component analysis

Under sparsity assumptions and when input data is pre-processed with the whitening transformation, k-means produces the solution to the linear independent component analysis (ICA) task. This aids in explaining the successful application of k-means to feature learning.[52]

Bilateral filtering

k-means implicitly assumes that the ordering of the input data set does not matter. The bilateral filter is similar to k-means and mean shift in that it maintains a set of data points that are iteratively replaced by means. However, the bilateral filter restricts the calculation of the (kernel weighted) mean to include only points that are close in the ordering of the input data.[51] This makes it applicable to problems such as image denoising, where the spatial arrangement of pixels in an image is of critical importance.

Similar problems

The set of squared error minimizing cluster functions also includes the k-medoids algorithm, an approach which forces the center point of each cluster to be one of the actual points, i.e., it uses medoids in place of centroids.

Software implementations

Different implementations of the algorithm exhibit performance differences, with the fastest on a test data set finishing in 10 seconds, the slowest taking 25,988 seconds (~7 hours).[1] The differences can be attributed to implementation quality, language and compiler differences, different termination criteria and precision levels, and the use of indexes for acceleration.

Free Software/Open Source

The following implementations are available under Free/Open Source Software licenses, with publicly available source code.

  • Accord.NET contains C# implementations for k-means, k-means++ and k-modes.
  • ALGLIB contains parallelized C++ and C# implementations for k-means and k-means++.
  • CrimeStat implements two spatial k-means algorithms, one of which allows the user to define the starting locations.
  • ELKI contains k-means (with Lloyd and MacQueen iteration, along with different initializations such as k-means++ initialization) and various more advanced clustering algorithms.
  • Julia contains a k-means implementation in the JuliaStats Clustering package.
  • KNIME contains nodes for k-means and k-medoids.
  • Mahout contains a MapReduce based k-means.
  • mlpack contains a C++ implementation of k-means.
  • Octave contains k-means.
  • OpenCV contains a k-means implementation.
  • Orange includes a component for k-means clustering with automatic selection of k and cluster silhouette scoring.
  • PSPP contains k-means; the QUICK CLUSTER command performs k-means clustering on the dataset.
  • R contains three k-means variations.
  • SciPy and scikit-learn contain multiple k-means implementations.
  • Spark MLlib implements a distributed k-means algorithm.
  • Torch contains an unsup package that provides k-means clustering.
  • Weka contains k-means and x-means.

Proprietary

The following implementations are available under proprietary license terms, and may not have publicly available source code.

See also

References

  1. ^ a b Kriegel, Hans-Peter; Schubert, Erich; Zimek, Arthur (2016). "The (black) art of runtime evaluation: Are we comparing algorithms or implementations?". Knowledge and Information Systems. 52 (2): 341–378. doi:10.1007/s10115-016-1004-2. ISSN 0219-1377.
  2. ^ MacQueen, J. B. (1967). Some Methods for classification and Analysis of Multivariate Observations. Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability. 1. University of California Press. pp. 281–297. MR 0214227. Zbl 0214.46201. Retrieved 2009-04-07.
  3. ^ Steinhaus, H. (1957). "Sur la division des corps matériels en parties". Bull. Acad. Polon. Sci. (in French). 4 (12): 801–804. MR 0090073. Zbl 0079.16403.
  4. ^ Lloyd, S. P. (1957). "Least square quantization in PCM". Bell Telephone Laboratories Paper. Published in journal much later: Lloyd., S. P. (1982). "Least squares quantization in PCM" (PDF). IEEE Transactions on Information Theory. 28 (2): 129–137. CiteSeerX 10.1.1.131.1338. doi:10.1109/TIT.1982.1056489. Retrieved 2009-04-15.
  5. ^ E.W. Forgy (1965). "Cluster analysis of multivariate data: efficiency versus interpretability of classifications". Biometrics. 21 (3): 768–769. JSTOR 2528559.
  6. ^ MacKay, David (2003). "Chapter 20. An Example Inference Task: Clustering" (PDF). Information Theory, Inference and Learning Algorithms. Cambridge University Press. pp. 284–292. ISBN 978-0-521-64298-9. MR 2012999.
  7. ^ Since the square root is a monotone function, this also is the minimum Euclidean distance assignment.
  8. ^ a b c d e Hartigan, J. A.; Wong, M. A. (1979). "Algorithm AS 136: A k-Means Clustering Algorithm". Journal of the Royal Statistical Society, Series C. 28 (1): 100–108. JSTOR 2346830.
  9. ^ a b Hamerly, G.; Elkan, C. (2002). "Alternatives to the k-means algorithm that find better clusterings" (PDF). Proceedings of the eleventh international conference on Information and knowledge management (CIKM).
  10. ^ Celebi, M. E.; Kingravi, H. A.; Vela, P. A. (2013). "A comparative study of efficient initialization methods for the k-means clustering algorithm". Expert Systems with Applications. 40 (1): 200–210. arXiv:1209.1960. doi:10.1016/j.eswa.2012.07.021.
  11. ^ Bradley, Paul S.; Fayyad, Usama M. (1998). "Refining Initial Points for k-Means Clustering". Proceedings of the Fifteenth International Conference on Machine Learning.
  12. ^ Vattani., A. (2011). "k-means requires exponentially many iterations even in the plane" (PDF). Discrete and Computational Geometry. 45 (4): 596–616. doi:10.1007/s00454-011-9340-1.
  13. ^ a b Arthur, D.; Manthey, B.; Roeglin, H. (2009). "k-means has polynomial smoothed complexity". Proceedings of the 50th Symposium on Foundations of Computer Science (FOCS). arXiv:0904.1113.
  14. ^ Garey, M.; Johnson, D.; Witsenhausen, H. (1982-03-01). "The complexity of the generalized Lloyd - Max problem (Corresp.)". IEEE Transactions on Information Theory. 28 (2): 255–256. doi:10.1109/TIT.1982.1056488. ISSN 0018-9448.
  15. ^ Kleinberg, Jon; Papadimitriou, Christos; Raghavan, Prabhakar (1998-12-01). "A Microeconomic View of Data Mining". Data Mining and Knowledge Discovery. 2 (4): 311–324. doi:10.1023/A:1009726428407. ISSN 1384-5810.
  16. ^ Aloise, D.; Deshpande, A.; Hansen, P.; Popat, P. (2009). "NP-hardness of Euclidean sum-of-squares clustering". Machine Learning. 75 (2): 245–249. doi:10.1007/s10994-009-5103-0.
  17. ^ Dasgupta, S.; Freund, Y. (July 2009). "Random Projection Trees for Vector Quantization". Information Theory, IEEE Transactions on. 55 (7): 3229–3242. arXiv:0805.1390. doi:10.1109/TIT.2009.2021326.
  18. ^ Mahajan, M.; Nimbhorkar, P.; Varadarajan, K. (2009). The Planar k-Means Problem is NP-Hard. Lecture Notes in Computer Science. 5431. pp. 274–285. CiteSeerX 10.1.1.331.1306. doi:10.1007/978-3-642-00202-1_24. ISBN 978-3-642-00201-4.
  19. ^ Inaba, M.; Katoh, N.; Imai, H. (1994). Applications of weighted Voronoi diagrams and randomization to variance-based k-clustering. Proceedings of 10th ACM Symposium on Computational Geometry. pp. 332–339. doi:10.1145/177424.178042.
  20. ^ D., Manning, Christopher (2008). Introduction to information retrieval. Raghavan, Prabhakar., Schütze, Hinrich. New York: Cambridge University Press. ISBN 978-0521865715. OCLC 190786122.
  21. ^ a b Arthur, David; Vassilvitskii, Sergei (2006-01-01). How Slow is the k-means Method?. Proceedings of the Twenty-second Annual Symposium on Computational Geometry. SCG '06. New York, NY, USA: ACM. pp. 144–153. doi:10.1145/1137856.1137880. ISBN 978-1595933409.
  22. ^ Arthur; Abhishek Bhowmick (2009). A theoretical analysis of Lloyd's algorithm for k-means clustering (PDF) (Thesis). Archived from the original (PDF) on 2015-12-08.
  23. ^ a b Phillips, Steven J. (2002-01-04). "Acceleration of K-Means and Related Clustering Algorithms". In Mount, David M.; Stein, Clifford. Acceleration of k-Means and Related Clustering Algorithms. Lecture Notes in Computer Science. 2409. Springer Berlin Heidelberg. pp. 166–177. doi:10.1007/3-540-45643-0_13. ISBN 978-3-540-43977-6.
  24. ^ a b Elkan, C. (2003). "Using the triangle inequality to accelerate k-means" (PDF). Proceedings of the Twentieth International Conference on Machine Learning (ICML).
  25. ^ a b Hamerly, Greg. "Making k-means even faster". CiteSeerX 10.1.1.187.3017.
  26. ^ a b Hamerly, Greg; Drake, Jonathan (2015). Accelerating Lloyd's algorithm for k-means clustering. Partitional Clustering Algorithms. pp. 41–78. doi:10.1007/978-3-319-09259-1_2. ISBN 978-3-319-09258-4.
  27. ^ Kanungo, T.; Mount, D. M.; Netanyahu, N. S.; Piatko, C. D.; Silverman, R.; Wu, A. Y. (2002). "An efficient k-means clustering algorithm: Analysis and implementation" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 24 (7): 881–892. doi:10.1109/TPAMI.2002.1017616. Retrieved 2009-04-24.
  28. ^ Drake, Jonathan (2012). "Accelerated k-means with adaptive distance bounds" (PDF). The 5th NIPS Workshop on Optimization for Machine Learning, OPT2012.
  29. ^ Dhillon, I. S.; Modha, D. M. (2001). "Concept decompositions for large sparse text data using clustering". Machine Learning. 42 (1): 143–175. doi:10.1023/a:1007612920971.
  30. ^ Steinbach, M., Karypis, G., & Kumar, V. (2000, August). A comparison of document clustering techniques. In KDD workshop on text mining (Vol. 400, No. 1, pp. 525-526).
  31. ^ Pelleg, D., & Moore, A. W. (2000, June). X-means: Extending k-means with Efficient Estimation of the Number of Clusters. In ICML (Vol. 1).
  32. ^ Hamerly, G., & Elkan, C. (2004). Learning the k in k-means. Advances in neural information processing systems, 16, 281.
  33. ^ Amorim, R.C.; Mirkin, B. (2012). "Minkowski Metric, Feature Weighting and Anomalous Cluster Initialisation in k-Means Clustering". Pattern Recognition. 45 (3): 1061–1075. doi:10.1016/j.patcog.2011.08.012.
  34. ^ Amorim, R.C.; Hennig, C. (2015). "Recovering the number of clusters in data sets with noise features using feature rescaling factors". Information Sciences. 324: 126–145. arXiv:1602.06989. doi:10.1016/j.ins.2015.06.039.
  35. ^ Sculley, David (2010). "Web-scale k-means clustering". Proceedings of the 19th international conference on World Wide Web. ACM. pp. 1177–1178. Retrieved 2016-12-21.
  36. ^ Telgarsky, Matus. "Hartigan's Method: k-means Clustering without Voronoi" (PDF).
  37. ^ Mirkes, E.M. "K-means and k-medoids applet". Retrieved 2 January 2016.
  38. ^ Kulis, Brian; Jordan, Michael I. (2012-06-26). Revisiting k-means: new algorithms via Bayesian nonparametrics (PDF). ICML. pp. 1131–1138. ISBN 9781450312851.
  39. ^ Honarkhah, M; Caers, J (2010). "Stochastic Simulation of Patterns Using Distance-Based Pattern Modeling". Mathematical Geosciences. 42 (5): 487–517. doi:10.1007/s11004-010-9276-7.
  40. ^ a b c Coates, Adam; Ng, Andrew Y. (2012). "Learning feature representations with k-means" (PDF). In G. Montavon, G. B. Orr, K.-R. Müller. Neural Networks: Tricks of the Trade. Springer.
  41. ^ Csurka, Gabriella; Dance, Christopher C.; Fan, Lixin; Willamowski, Jutta; Bray, Cédric (2004). Visual categorization with bags of keypoints (PDF). ECCV Workshop on Statistical Learning in Computer Vision.
  42. ^ a b Coates, Adam; Lee, Honglak; Ng, Andrew Y. (2011). An analysis of single-layer networks in unsupervised feature learning (PDF). International Conference on Artificial Intelligence and Statistics (AISTATS). Archived from the original (PDF) on 2013-05-10.
  43. ^ Schwenker, Friedhelm; Kestler, Hans A.; Palm, Günther (2001). "Three learning phases for radial-basis-function networks". Neural Networks. 14 (4–5): 439–458. CiteSeerX 10.1.1.109.312. doi:10.1016/s0893-6080(01)00027-2.
  44. ^ Lin, Dekang; Wu, Xiaoyun (2009). Phrase clustering for discriminative learning (PDF). Annual Meeting of the ACL and IJCNLP. pp. 1030–1038.
  45. ^ Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 16.1. Gaussian Mixture Models and k-Means Clustering". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
  46. ^ Aharon, Michal; Elad, Michael; Bruckstein, Alfred (2006). "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation" (PDF). IEEE Transactions on Signal Processing. 54 (11): 4311. Bibcode:2006ITSP...54.4311A. doi:10.1109/TSP.2006.881199. Archived from the original (PDF) on 2013-06-20.
  47. ^ H. Zha, C. Ding, M. Gu, X. He and H.D. Simon (Dec 2001). "Spectral Relaxation for k-means Clustering" (PDF). Neural Information Processing Systems Vol.14 (NIPS 2001): 1057–1064.
  48. ^ Chris Ding and Xiaofeng He (July 2004). "K-means Clustering via Principal Component Analysis" (PDF). Proc. Of Int'l Conf. Machine Learning (ICML 2004): 225–232.
  49. ^ Drineas, P.; A. Frieze; R. Kannan; S. Vempala; V. Vinay (2004). "Clustering large graphs via the singular value decomposition" (PDF). Machine Learning. 56 (1–3): 9–33. doi:10.1023/b:mach.0000033113.59016.96. Retrieved 2012-08-02.
  50. ^ Cohen, M.; S. Elder; C. Musco; C. Musco; M. Persu (2014). "Dimensionality reduction for k-means clustering and low rank approximation (Appendix B)". arXiv:1410.6801 [cs.DS].
  51. ^ a b Little, M.A.; Jones, N.S. (2011). "Generalized Methods and Solvers for Piecewise Constant Signals: Part I" (PDF). Proceedings of the Royal Society A. 467 (2135): 3088–3114. Bibcode:2011RSPSA.467.3088L. doi:10.1098/rspa.2010.0671. PMID 22003312.
  52. ^ Alon Vinnikov and Shai Shalev-Shwartz (2014). "K-means Recovers ICA Filters when Independent Components are Sparse" (PDF). Proc. Of Int'l Conf. Machine Learning (ICML 2014).