The hardest part of clustering is determining the optimal number of clusters. Several methods for doing so are introduced below.
The silhouette coefficient is a way to interpret and validate the consistency of a clustering result, proposed by Peter J. Rousseeuw in 1987. The method is as follows: for each sample $i$, compute $a(i)$, the mean dissimilarity between $i$ and all other members of its own cluster (the within-cluster dissimilarity), and $b(i)$, the smallest mean dissimilarity between $i$ and the members of any other cluster. The silhouette of $i$ is then

$$s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}},$$

which lies in $[-1, 1]$. Values near 1 mean $i$ is well matched to its own cluster, and the mean silhouette over all samples can be compared across candidate numbers of clusters.
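As a concrete illustration, here is a minimal pure-Python sketch of the silhouette computation; the toy 1-D data and pre-assigned labels are assumptions for the example, not part of any real dataset:

```python
# Minimal sketch of the silhouette coefficient s(i) = (b - a) / max(a, b):
# a(i) is the mean within-cluster dissimilarity of sample i,
# b(i) is the smallest mean dissimilarity of i to any other cluster.

def silhouette(points, labels):
    clusters = set(labels)
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        def mean_dist(c):
            # mean distance from point i to the members of cluster c (excluding i)
            ds = [abs(p - q) for j, (q, l) in enumerate(zip(points, labels))
                  if l == c and j != i]
            return sum(ds) / len(ds) if ds else 0.0
        a = mean_dist(lab)                                   # within-cluster dissimilarity
        b = min(mean_dist(c) for c in clusters if c != lab)  # nearest other cluster
        scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / len(scores)  # mean silhouette over all samples

pts = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
labs = [0, 0, 0, 1, 1, 1]
print(silhouette(pts, labs))  # well-separated clusters -> close to 1
```

Running the mean silhouette for several candidate values of K and picking the maximum is the usual way this score is used for model selection.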
The dispersion coefficient is based on the consensus matrix (i.e. the average of connectivity matrices) and was proposed by Kim et al. (2007) to measure the reproducibility of the clusters obtained from NMF.
It is defined as:

$$\rho = \frac{1}{n^2} \sum_{i,j=1}^{n} 4\left(C_{ij} - \frac{1}{2}\right)^2$$
where n is the total number of samples.
By construction, $0 \leq \rho \leq 1$, and $\rho = 1$ only for a perfect consensus matrix, in which all entries are 0 or 1. A perfect consensus matrix is obtained only when all the connectivity matrices are identical, meaning that the algorithm gave the same clusters at each run. See Kim et al. (2007).
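The definition above translates directly into code. A minimal sketch (the two toy consensus matrices are assumptions chosen to show the extremes):

```python
# Dispersion coefficient of a consensus matrix C (Kim et al., 2007):
# rho = (1/n^2) * sum_{i,j} 4 * (C[i][j] - 0.5)^2
# rho = 1 exactly when every entry of C is 0 or 1 (perfect consensus).

def dispersion(C):
    n = len(C)
    return sum(4 * (C[i][j] - 0.5) ** 2
               for i in range(n) for j in range(n)) / n ** 2

perfect = [[1, 1, 0],
           [1, 1, 0],
           [0, 0, 1]]      # every entry 0/1 -> clusters identical on every run
fuzzy = [[1.0, 0.5, 0.5],
         [0.5, 1.0, 0.5],
         [0.5, 0.5, 1.0]]  # off-diagonal 0.5 -> maximally ambiguous co-clustering

print(dispersion(perfect))  # 1.0
print(dispersion(fuzzy))    # 0.333...
```

Each entry of the perfect matrix contributes $4(0.5)^2 = 1$, so the sum is $n^2$ and $\rho = 1$; the 0.5 entries of the fuzzy matrix contribute nothing, pulling $\rho$ down.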
Having brought up the consensus matrix, we should discuss consensus clustering. Consensus clustering selects the optimal number of clusters (K) for a given clustering algorithm. The underlying idea: resample the data with replacement, and judge whether the clustering parameters are appropriate by how consistently any given sample is clustered across the resampled datasets.
In plain terms, the steps are:
Step 1: Randomly draw subsets from the original data. Each subset should not be too small (preferably more than half of the original dataset), and there should be enough subsets that every sample is drawn many times (100+ runs). Then apply a clustering method (this can be K-means or hierarchical clustering) to each subset separately.
Step 2:
The key in this step is to build a new matrix: the consensus matrix. We said earlier that the input to clustering is usually a distance matrix, so how is the consensus matrix built? Suppose there are $N$ samples $D_1, D_2, \ldots, D_N$; the consensus matrix is then an $N \times N$ square matrix:

|      | D1  | D2  | …   | Dn  |
|------|-----|-----|-----|-----|
| D1   | C11 | C12 | …   | C1n |
| D2   | C21 | C22 | …   | C2n |
| …    | …   | …   | Cij | …   |
| Dn   | Cn1 | Cn2 | …   | Cnn |
Here, $C_{ij}$ is the probability that samples $D_i$ and $D_j$ are assigned to the same cluster over the repeated clustering runs. The value lies between 0 and 1: $C_{ij} = 1$ means the two samples fell in the same cluster in all 100 runs, and $C_{ij} = 0$ means they were never in the same cluster.
So what kind of consensus matrix does a good clustering produce? Exactly: a matrix made up entirely of 0s and 1s, meaning that similar samples always end up in the same cluster and dissimilar samples never do, which is precisely what clustering sets out to achieve.
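The resampling steps above can be sketched as follows. The toy 1-D data and the naive k=2 k-means are assumptions for illustration; real analyses typically use a dedicated package such as ConsensusClusterPlus:

```python
import random

def kmeans2(points, iters=10):
    # Naive 1-D k-means with k=2, initialized at the extremes so that
    # well-separated toy data always converges to the obvious split.
    c = [min(points), max(points)]
    for _ in range(iters):
        labels = [0 if abs(p - c[0]) <= abs(p - c[1]) else 1 for p in points]
        for k in (0, 1):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                c[k] = sum(members) / len(members)
    return labels

def consensus_matrix(data, runs=100, frac=0.8):
    n = len(data)
    together = [[0] * n for _ in range(n)]  # times i and j were co-clustered
    sampled = [[0] * n for _ in range(n)]   # times i and j were both drawn
    for _ in range(runs):
        idx = random.sample(range(n), int(frac * n))  # random subset of samples
        labels = kmeans2([data[i] for i in idx])
        for a, i in enumerate(idx):
            for b, j in enumerate(idx):
                sampled[i][j] += 1
                if labels[a] == labels[b]:
                    together[i][j] += 1
    # C[i][j] = P(i and j in the same cluster | both were sampled)
    return [[together[i][j] / sampled[i][j] if sampled[i][j] else 0.0
             for j in range(n)] for i in range(n)]

random.seed(0)
data = [1.0, 1.1, 0.9, 9.0, 9.2, 8.8]
C = consensus_matrix(data)
# For these cleanly separated points, C[0][1] is 1 (always together)
# and C[0][3] is 0 (never together).
```

Because entry $C_{ij}$ is conditioned on both samples being drawn, the matrix stays a probability even though each run sees only a subset of the data.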
Step 3: Cluster the consensus matrix itself (hierarchical clustering is convenient for visualization). In a matrix of only 0s and 1s, the 1-entries group together and the 0-entries separate, which shows up as clean blocks in a heatmap. In NMF-based clustering, the cophenetic correlation coefficient is computed as follows.
Cophenetic Coefficient
How good is the clustering we just performed? There is an index, called the Cross Correlation or Cophenetic Correlation Coefficient (CP), that measures the goodness of fit of a clustering, much like the correlation coefficient in regression.
To compute the cophenetic correlation coefficient of a hierarchical clustering, we need two pieces of information:
- Distance matrix
- Cophenetic matrix
The cophenetic correlation coefficient is computed based on hierarchical clustering.
The cophenetic correlation coefficient is simply the correlation coefficient between the distance matrix and the cophenetic matrix: Correl(Dist, CP) = 86.399% in this example. Since the value is quite close to 100%, we can say the clustering fits the data well.
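A sketch of this computation using SciPy; the toy data are an assumption for the example. `cophenet` returns the coefficient together with the condensed cophenetic distances (the dendrogram heights at which each pair of samples first merges):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

# Two well-separated 1-D groups as toy input.
X = np.array([[1.0], [1.2], [0.9], [8.0], [8.3], [7.9]])

dist = pdist(X)                      # condensed pairwise distance matrix
Z = linkage(dist, method='average')  # hierarchical clustering (UPGMA)
c, coph_dists = cophenet(Z, dist)    # c = cophenetic correlation coefficient

print(c)  # close to 1 when the dendrogram preserves the original distances
```

A value of `c` near 1 means the dendrogram's merge heights faithfully reproduce the original pairwise distances, which is exactly the "goodness of fit" described above.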