AgglomerativeClustering

class sklearn.cluster.AgglomerativeClustering(n_clusters=2, *, metric='euclidean', memory=None, connectivity=None, compute_full_tree='auto', linkage='ward', distance_threshold=None, compute_distances=False)

Agglomerative Clustering.

Recursively merges pairs of clusters of sample data; uses linkage distance.

Read more in the User Guide.

Parameters:
n_clusters : int or None, default=2

The number of clusters to find. It must be None if distance_threshold is not None.

metric : str or callable, default=”euclidean”

Metric used to compute the linkage. Can be “euclidean”, “l1”, “l2”, “manhattan”, “cosine”, or “precomputed”. If linkage is “ward”, only “euclidean” is accepted. If “precomputed”, a distance matrix is needed as input for the fit method.

Added in version 1.2.

Deprecated since version 1.4: metric=None is deprecated in 1.4 and will be removed in 1.6. Let metric be the default value (i.e. "euclidean") instead.
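For instance, a minimal sketch (not part of the original reference; the data and parameter choices here are illustrative) of clustering a precomputed distance matrix:

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.metrics import pairwise_distances

    X = np.array([[1, 2], [1, 4], [4, 2], [4, 4]])
    # Precompute a square (n_samples, n_samples) distance matrix.
    D = pairwise_distances(X, metric="manhattan")

    # "ward" only accepts "euclidean", so pick another linkage here.
    model = AgglomerativeClustering(
        n_clusters=2, metric="precomputed", linkage="average")
    labels = model.fit_predict(D)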

memory : str or object with the joblib.Memory interface, default=None

Used to cache the output of the computation of the tree. By default, no caching is done. If a string is given, it is the path to the caching directory.

connectivity : array-like, sparse matrix, or callable, default=None

Connectivity matrix. Defines for each sample the neighboring samples following a given structure of the data. This can be a connectivity matrix itself or a callable that transforms the data into a connectivity matrix, such as derived from kneighbors_graph. Default is None, i.e., the hierarchical clustering algorithm is unstructured.
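As a short illustration (synthetic data and parameter values assumed for the sketch), a connectivity matrix built with kneighbors_graph restricts merges to neighboring samples:

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.neighbors import kneighbors_graph

    X = np.random.RandomState(0).rand(50, 2)  # illustrative data

    # Connect each sample to its 5 nearest neighbors; the hierarchy
    # may then only merge clusters that are connected in this graph.
    connectivity = kneighbors_graph(X, n_neighbors=5, include_self=False)
    model = AgglomerativeClustering(
        n_clusters=3, connectivity=connectivity, linkage="ward")
    labels = model.fit_predict(X)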

compute_full_tree : ‘auto’ or bool, default=’auto’

Stop early the construction of the tree at n_clusters. This is useful to decrease computation time if the number of clusters is not small compared to the number of samples. This option is useful only when specifying a connectivity matrix. Note also that when varying the number of clusters and using caching, it may be advantageous to compute the full tree. It must be True if distance_threshold is not None. By default compute_full_tree is “auto”, which is equivalent to True when distance_threshold is not None or when n_clusters is less than the maximum of 100 and 0.02 * n_samples. Otherwise, “auto” is equivalent to False.

linkage : {‘ward’, ‘complete’, ‘average’, ‘single’}, default=’ward’

Which linkage criterion to use. The linkage criterion determines which distance to use between sets of observations. The algorithm will merge the pairs of clusters that minimize this criterion.

  • ‘ward’ minimizes the variance of the clusters being merged.

  • ‘average’ uses the average of the distances of each observation of the two sets.

  • ‘complete’ or ‘maximum’ linkage uses the maximum distances between all observations of the two sets.

  • ‘single’ uses the minimum of the distances between all observations of the two sets.

Added in version 0.20: Added the ‘single’ option

For examples comparing different linkage criteria, see Comparing different hierarchical linkage methods on toy datasets.
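A quick sketch (the toy data below is chosen only to make the effect visible) showing how the criterion changes the result:

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    X = np.array([[0.0, 0], [0.1, 0], [5.0, 5], [5.1, 5], [10.0, 0]])
    for linkage in ("ward", "complete", "average", "single"):
        labels = AgglomerativeClustering(
            n_clusters=2, linkage=linkage).fit_predict(X)
        print(linkage, labels)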

distance_threshold : float, default=None

The linkage distance threshold at or above which clusters will not be merged. If not None, n_clusters must be None and compute_full_tree must be True.

Added in version 0.21.
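For example (a small sketch with illustrative data; the threshold value is arbitrary), letting the threshold determine the number of clusters:

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
    # n_clusters must be None when distance_threshold is set.
    model = AgglomerativeClustering(n_clusters=None, distance_threshold=5.0)
    model.fit(X)
    print(model.n_clusters_)  # number of clusters implied by the threshold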

compute_distances : bool, default=False

Computes distances between clusters even if distance_threshold is not used. This can be used to make dendrogram visualization, but introduces a computational and memory overhead.

Added in version 0.24.

For an example of dendrogram visualization, see Plot Hierarchical Clustering Dendrogram.
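A condensed sketch of that idea (adapted, not the verbatim gallery code): fit so that distances are computed, convert children_ and distances_ into a SciPy linkage matrix, and plot it:

    import numpy as np
    from scipy.cluster.hierarchy import dendrogram
    from sklearn.cluster import AgglomerativeClustering

    X = np.random.RandomState(0).rand(20, 2)  # illustrative data
    model = AgglomerativeClustering(
        distance_threshold=0, n_clusters=None).fit(X)

    # counts[i] = number of original samples under non-leaf node i.
    n_samples = len(model.labels_)
    counts = np.zeros(model.children_.shape[0])
    for i, merge in enumerate(model.children_):
        counts[i] = sum(1 if child < n_samples else counts[child - n_samples]
                        for child in merge)

    linkage_matrix = np.column_stack(
        [model.children_, model.distances_, counts]).astype(float)
    dendrogram(linkage_matrix)  # displays with matplotlib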

Attributes:
n_clusters_ : int

The number of clusters found by the algorithm. If distance_threshold=None, it will be equal to the given n_clusters.

labels_ : ndarray of shape (n_samples,)

Cluster labels for each point.

n_leaves_ : int

Number of leaves in the hierarchical tree.

n_connected_components_ : int

The estimated number of connected components in the graph.

Added in version 0.21: n_connected_components_ was added to replace n_components_.

n_features_in_ : int

Number of features seen during fit.

Added in version 0.24.

feature_names_in_ : ndarray of shape (n_features_in_,)

Names of features seen during fit. Defined only when X has feature names that are all strings.

Added in version 1.0.

children_ : array-like of shape (n_samples-1, 2)

The children of each non-leaf node. Values less than n_samples correspond to leaves of the tree, which are the original samples. A node i greater than or equal to n_samples is a non-leaf node and has children children_[i - n_samples]. Alternatively, at the i-th iteration, children[i][0] and children[i][1] are merged to form node n_samples + i.
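To make the encoding concrete, a tiny illustration (1-D data chosen so the merge order is easy to follow; the exact rows may vary):

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    X = np.array([[0.0], [0.1], [1.0], [5.0]])  # 4 samples
    model = AgglomerativeClustering(n_clusters=1).fit(X)
    # A first row such as [0, 1] means samples 0 and 1 merged into node 4;
    # a later row [2, 4] merges sample 2 with node 4 to form node 5, etc.
    print(model.children_)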

distances_ : array-like of shape (n_nodes-1,)

Distances between nodes in the corresponding place in children_. Only computed if distance_threshold is used or compute_distances is set to True.

See also

FeatureAgglomeration

Agglomerative clustering but for features instead of samples.

ward_tree

Hierarchical clustering with ward linkage.

Examples

>>> from sklearn.cluster import AgglomerativeClustering
>>> import numpy as np
>>> X = np.array([[1, 2], [1, 4], [1, 0],
...               [4, 2], [4, 4], [4, 0]])
>>> clustering = AgglomerativeClustering().fit(X)
>>> clustering
AgglomerativeClustering()
>>> clustering.labels_
array([1, 1, 1, 0, 0, 0])

fit(X, y=None)

Fit the hierarchical clustering from features, or distance matrix.

Parameters:
X : array-like, shape (n_samples, n_features) or (n_samples, n_samples)

Training instances to cluster, or distances between instances if metric='precomputed'.

y : Ignored

Not used, present here for API consistency by convention.

Returns:
self : object

Returns the fitted instance.

fit_predict(X, y=None)

Fit and return the result of each sample’s clustering assignment.

In addition to fitting, this method also returns the result of the clustering assignment for each sample in the training set.

Parameters:
X : array-like of shape (n_samples, n_features) or (n_samples, n_samples)

Training instances to cluster, or distances between instances if metric='precomputed'.

y : Ignored

Not used, present here for API consistency by convention.

Returns:
labels : ndarray of shape (n_samples,)

Cluster labels.

get_metadata_routing()

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
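A brief usage sketch for these accessors (the parameter values below are arbitrary):

    from sklearn.cluster import AgglomerativeClustering

    model = AgglomerativeClustering()
    print(model.get_params()["linkage"])  # 'ward'
    model.set_params(n_clusters=4, linkage="average")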

Gallery examples

  • A demo of structured Ward hierarchical clustering on an image of coins

  • Agglomerative clustering with and without structure

  • Agglomerative clustering with different metrics

  • Comparing different clustering algorithms on toy datasets

  • Comparing different hierarchical linkage methods on toy datasets

  • Hierarchical clustering: structured vs unstructured ward

  • Inductive Clustering

  • Plot Hierarchical Clustering Dendrogram

  • Various Agglomerative Clustering on a 2D embedding of digits
