from sklearn.cluster import KMeans
from yellowbrick.cluster import KElbowVisualizer
km = KMeans(init="k-means++", random_state=0, n_init="auto")
visualizer = KElbowVisualizer(km, k=(2,10))
visualizer.fit(data_no_outliers)
visualizer.show()
Out[21]:
<Axes: title={'center': 'Distortion Score Elbow for KMeans Clustering'}, xlabel='k', ylabel='distortion score'>
From the elbow plot, k=6 looks like the best choice.
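If you prefer to read the chosen k programmatically rather than off the plot, the fitted visualizer exposes it (a small aside based on yellowbrick's KElbowVisualizer attributes):

print(visualizer.elbow_value_)   # the k located by the knee detection, 6 here
print(visualizer.elbow_score_)   # the distortion score at that k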
In [22]:
from sklearn.metrics import davies_bouldin_score, silhouette_score, silhouette_samples
import matplotlib.cm as cm

def make_Silhouette_plot(X, n_clusters):
    plt.xlim([-0.1, 1])
    plt.ylim([0, len(X) + (n_clusters + 1) * 10])
    # Build the clustering model
    clusterer = KMeans(n_clusters=n_clusters,
                       max_iter=1000,
                       n_init=10,
                       init="k-means++",
                       random_state=10)
    # Fit and predict the cluster labels
    cluster_label = clusterer.fit_predict(X)
    # Average silhouette coefficient over all samples
    silhouette_avg = silhouette_score(X, cluster_label)
    print(f"n_clusterers: {n_clusters}, silhouette_score_avg:{silhouette_avg}")
    # Per-sample silhouette values
    sample_silhouette_value = silhouette_samples(X, cluster_label)
    y_lower = 10
    for i in range(n_clusters):
        # Silhouette values of the i-th cluster
        i_cluster_silhouette_value = sample_silhouette_value[cluster_label == i]
        # Sort them
        i_cluster_silhouette_value.sort()
        size_cluster_i = i_cluster_silhouette_value.shape[0]
        y_upper = y_lower + size_cluster_i
        # Pick a color for this cluster
        color = cm.nipy_spectral(float(i) / n_clusters)
        # Fill the silhouette band
        plt.fill_betweenx(
            np.arange(y_lower, y_upper),
            0,
            i_cluster_silhouette_value,
            facecolor=color,
            edgecolor=color,
            alpha=0.7
        )
        # Annotate the band with its cluster index
        plt.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
        y_lower = y_upper + 10
    plt.title(f"The Silhouette Plot for n_cluster = {n_clusters}", fontsize=26)
    plt.xlabel("The silhouette coefficient values", fontsize=24)
    plt.ylabel("Cluster Label", fontsize=24)
    plt.axvline(x=silhouette_avg, color="red", linestyle="--")
    # x/y tick labels
    plt.xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
    plt.yticks([])

range_n_clusters = list(range(2, 10))

for n in range_n_clusters:
    print(f"N cluster:{n}")
    make_Silhouette_plot(data_no_outliers, n)
    plt.savefig(f"Silhouette_Plot_{n}.png")
    plt.close()

N cluster:2
n_clusterers: 2, silhouette_score_avg:0.18112038570087005
......
N cluster:9
n_clusterers: 9, silhouette_score_avg:0.1465020645956104
Comparing the silhouette coefficients across different values of k:
From these results, both k=6 and k=5 work reasonably well; here we finally choose k=5 for the clustering:
In [23]:
km = KMeans(n_clusters=5,
init="k-means++",
n_init=10,
max_iter=100,
random_state=42
)
# Cluster the outlier-free data
clusters_predict = km.fit_predict(data_no_outliers)
How do we evaluate the clustering quality? Three commonly used metrics:
In [24]:
from sklearn.metrics import silhouette_score  # silhouette coefficient
from sklearn.metrics import calinski_harabasz_score
from sklearn.metrics import davies_bouldin_score  # Davies-Bouldin index (DBI)
The Davies-Bouldin index is an evaluation method for clustering: the smaller its value, the better the clustering result. The idea is to measure cluster quality by comparing the distances between clusters with the distances within each cluster. It is computed as:

$$DB = \frac{1}{k}\sum_{i=1}^{k}\max_{j \neq i}\frac{s_i + s_j}{d_{ij}}$$

where $k$ is the number of clusters, $s_i$ is the average distance from the points of cluster $i$ to its centroid, and $d_{ij}$ is the distance between the centroids of clusters $i$ and $j$.

With the Davies-Bouldin index we can compare the clusterings produced by different algorithms or parameter settings and pick the best scheme. The index reflects the dispersion of a clustering result: when clusters resemble one another, the index comes out larger, so it can distinguish how similar different clusterings are.

In addition, the Davies-Bouldin index assumes no prior knowledge of cluster shape or size, so it can be applied to many different clustering scenarios.
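As a sanity check on the definition above, here is a minimal NumPy sketch of the index (illustrative only; the actual evaluation below uses sklearn's davies_bouldin_score):

import numpy as np

def davies_bouldin(X, labels):
    # Mean over clusters of the worst (s_i + s_j) / d_ij ratio, where
    # s_i is the average distance of cluster i's points to its centroid
    # and d_ij is the distance between centroids i and j
    ks = np.unique(labels)
    centroids = np.array([X[labels == k].mean(axis=0) for k in ks])
    s = np.array([np.linalg.norm(X[labels == k] - c, axis=1).mean()
                  for k, c in zip(ks, centroids)])
    db = 0.0
    for i in range(len(ks)):
        db += max((s[i] + s[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(len(ks)) if j != i)
    return db / len(ks)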
The Calinski-Harabasz score evaluates clustering quality from the ratio of the variance between cluster centers to the variance within clusters; the larger the index, the better the clustering.

It is computed from the between-class and within-class variances:

$$s = \frac{\mathrm{tr}(B_k)}{\mathrm{tr}(W_k)} \times \frac{n-k}{k-1}$$

where $k$ is the number of clusters, $n$ is the total number of samples, $\mathrm{tr}(B_k)$ is the between-class variance, and $\mathrm{tr}(W_k)$ is the within-class variance.

$W_k$ is computed as:

$$W_k = \sum_{q=1}^{k}\sum_{x \in C_q}(x - c_q)(x - c_q)^{T}$$

The trace only considers the diagonal elements of the matrix, i.e. the squared Euclidean distances from all points of a cluster to its centroid.

$B_k$ is computed as:

$$B_k = \sum_{q=1}^{k} n_q\,(c_q - c_E)(c_q - c_E)^{T}$$

where $C_q$ is the set of samples in cluster $q$, $c_q$ is the centroid of cluster $q$, $c_E$ is the centroid of all samples, and $n_q$ is the number of samples in cluster $q$.
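Written out directly in NumPy, the formula reads as follows (a sketch that mirrors the definition; the evaluation below uses sklearn's calinski_harabasz_score):

import numpy as np

def calinski_harabasz(X, labels):
    # tr(B_k) / tr(W_k) * (n - k) / (k - 1), per the definition above
    n = len(X)
    ks = np.unique(labels)
    k = len(ks)
    c_all = X.mean(axis=0)                 # global centroid c_E
    tr_B = tr_W = 0.0
    for q in ks:
        Xq = X[labels == q]
        c_q = Xq.mean(axis=0)              # centroid of cluster q
        tr_B += len(Xq) * np.sum((c_q - c_all) ** 2)       # between-cluster part
        tr_W += np.sum(np.asarray(Xq - c_q) ** 2)          # within-cluster part
    return (tr_B / tr_W) * (n - k) / (k - 1)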
The Silhouette Score is also called the silhouette coefficient.

It measures clustering quality by combining intra-cluster cohesion with inter-cluster separation. For each data point $i$ it considers the following factors:

- $a(i)$: the average distance from point $i$ to the other points of its own cluster (cohesion);
- $b(i)$: the smallest average distance from point $i$ to the points of any other cluster (separation).

Specifically, the Silhouette Score for point $i$ is computed as:

$$s(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}}$$

The silhouette coefficient lies between -1 and 1: the closer to 1, the better the clustering; the closer to -1, the worse.
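For a single sample the definition translates directly into code (a sketch, assuming X is a NumPy array and labels an integer array; in practice sklearn's silhouette_samples / silhouette_score handle this, as used above and below):

import numpy as np

def silhouette_point(X, labels, i):
    # s(i) = (b - a) / max(a, b) for sample i, per the formula above
    own = labels[i]
    same = X[labels == own]
    # a: mean distance to the *other* points of i's own cluster
    a = np.linalg.norm(same - X[i], axis=1).sum() / (len(same) - 1)
    # b: smallest mean distance to the points of any other cluster
    b = min(np.linalg.norm(X[labels == k] - X[i], axis=1).mean()
            for k in np.unique(labels) if k != own)
    return (b - a) / max(a, b)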
In [25]:
print(f"Davies bouldin score: {davies\_bouldin\_score(data\_no\_outliers,clusters\_predict)}")
print(f"Calinski Score: {calinski\_harabasz\_score(data\_no\_outliers,clusters\_predict)}")
print(f"Silhouette Score: {silhouette\_score(data\_no\_outliers,clusters\_predict)}")
Davies bouldin score: 1.6775659296391374
Calinski Score: 6914.724747148267
Silhouette Score: 0.1672869940907178
For reference, see the official prince repository: https://github.com/MaxHalford/prince
In [26]:
import prince
import plotly.express as px

def get_pca_2d(df, predict):
    """Fit a PCA model keeping 2 principal components."""
    pca_2d_object = prince.PCA(
        n_components=2,            # keep two principal components
        n_iter=3,                  # number of iterations
        rescale_with_mean=True,    # rescale using the mean and standard deviation
        rescale_with_std=True,
        copy=True,
        check_input=True,
        engine="sklearn",
        random_state=42
    )
    # Fit the model
    pca_2d_object.fit(df)
    # Transform the original data
    df_pca_2d = pca_2d_object.transform(df)
    df_pca_2d.columns = ["comp1", "comp2"]
    # Attach the cluster predictions
    df_pca_2d["cluster"] = predict
    return pca_2d_object, df_pca_2d

# Same idea, but keeping 3 principal components
def get_pca_3d(df, predict):
    """Fit a PCA model keeping 3 principal components."""
    pca_3d_object = prince.PCA(
        n_components=3,            # keep three principal components
        n_iter=3,
        rescale_with_mean=True,
        rescale_with_std=True,
        copy=True,
        check_input=True,
        engine="sklearn",
        random_state=42
    )
    pca_3d_object.fit(df)
    df_pca_3d = pca_3d_object.transform(df)
    df_pca_3d.columns = ["comp1", "comp2", "comp3"]
    df_pca_3d["cluster"] = predict
    return pca_3d_object, df_pca_3d
Below is the plotting function for visualizing the first two principal components:
In [27]:
def plot_pca_2d(df, title="PCA Space", opacity=0.8, width_line=0.1):
    """Visualize the data in the space of the first two principal components."""
    df = df.astype({"cluster": "object"})  # cast the cluster column
    df = df.sort_values("cluster")
    columns = df.columns[0:3].tolist()
    # Scatter plot
    fig = px.scatter(
        df,
        x=columns[0],
        y=columns[1],
        color="cluster",
        template="plotly",
        color_discrete_sequence=px.colors.qualitative.Vivid,
        title=title
    )
    # Update traces
    fig.update_traces(marker={
        "size": 8,
        "opacity": opacity,
        "line": {"width": width_line, "color": "black"}
    })
    # Update layout
    fig.update_layout(
        width=800,        # figure size
        height=700,
        autosize=False,
        showlegend=True,
        legend=dict(title_font_family="Times New Roman", font=dict(size=20)),
        scene=dict(xaxis=dict(title="comp1", titlefont_color="black"),
                   yaxis=dict(title="comp2", titlefont_color="black")),
        font=dict(family="Gilroy", color="black", size=15))
    fig.show()
Below is the plotting function for visualizing the first three principal components:
In [28]:
def plot_pca_3d(df, title="PCA Space", opacity=0.8, width_line=0.1):
    """Visualize the data in the space of the first three principal components."""
    df = df.astype({"cluster": "object"})
    df = df.sort_values("cluster")
    # Define the figure
    fig = px.scatter_3d(
        df,
        x="comp1",
        y="comp2",
        z="comp3",
        color="cluster",
        template="plotly",
        color_discrete_sequence=px.colors.qualitative.Vivid,
        title=title
    )
    # Update traces
    fig.update_traces(marker={
        "size": 4,
        "opacity": opacity,
        "line": {"width": width_line, "color": "black"}
    })
    # Update layout
    fig.update_layout(
        width=800,        # figure size
        height=800,
        autosize=True,
        showlegend=True,
        legend=dict(title_font_family="Times New Roman", font=dict(size=20)),
        scene=dict(xaxis=dict(title="comp1", titlefont_color="black"),
                   yaxis=dict(title="comp2", titlefont_color="black"),
                   zaxis=dict(title="comp3", titlefont_color="black")),
        font=dict(family="Gilroy", color="black", size=15))
    fig.show()
Below is the 2-D visualization:
In [29]:
pca_2d_object, df_pca_2d = get_pca_2d(data_no_outliers, clusters_predict)
In [30]:
plot_pca_2d(df_pca_2d, title = "PCA Space", opacity=1, width_line = 0.1)
We can see that the clustering does not look great: the clusters are not well separated.
Below is the 3-D visualization:
In [31]:
pca_3d_object, df_pca_3d = get_pca_3d(data_no_outliers, clusters_predict)
In [32]:
plot_pca_3d(df_pca_3d, title = "PCA Space", opacity=1, width_line = 0.1)
print("The variability is : ", pca_3d_object.eigenvalues_summary)
The variability is : eigenvalue % of variance % of variance (cumulative)
component
0 2.245 11.81% 11.81%
1 1.774 9.34% 21.15%
2 1.298 6.83% 27.98%
Again, the samples are not well separated in this projection either.
The first three principal components together explain only 27.98% of the variance, not enough to capture the information and patterns in the original data. Next we turn to t-SNE, a dimensionality-reduction method designed mainly for visualizing high-dimensional data:
First, draw a subsample:
In [33]:
from sklearn.manifold import TSNE
# Randomly sample from the outlier-free data
sampling_data = data_no_outliers.sample(frac=0.5, replace=True, random_state=1)
# Sample the clustering result (clusters_predict) the same way;
# using the same random_state keeps rows and labels aligned
sampling_cluster = pd.DataFrame(clusters_predict).sample(frac=0.5, replace=True, random_state=1)[0].values
sampling_cluster
Out[33]:
array([4, 1, 1, ..., 2, 0, 4])
In [34]:
# Build the 2-D t-SNE model
tsne2 = TSNE(
n_components=2,
learning_rate=500,
init='random',
perplexity=200,
n_iter = 5000)
In [35]:
data_tsne_2d = tsne2.fit_transform(sampling_data)
In [36]:
# Convert to a DataFrame and attach the original cluster labels
df_tsne_2d = pd.DataFrame(data_tsne_2d, columns=["comp1","comp2"])
df_tsne_2d["cluster"] = sampling_cluster
In [37]:
plot_pca_2d(df_tsne_2d, title = "T-SNE Space", opacity=1, width_line = 0.1)
Now apply a 3-D t-SNE reduction to the clustered data:
In [38]:
# Build the 3-D t-SNE model
tsne3 = TSNE(
n_components=3,
learning_rate=500,
init='random',
perplexity=200,
n_iter = 5000
)
In [39]:
# Fit the model and transform the data
data_tsne_3d = tsne3.fit_transform(sampling_data)
In [40]:
# Convert to a DataFrame and attach the original cluster labels
df_tsne_3d = pd.DataFrame(data_tsne_3d, columns=["comp1","comp2","comp3"])
df_tsne_3d["cluster"] = sampling_cluster
In [41]:
plot_pca_3d(df_tsne_3d, title = "T-SNE Space", opacity=1, width_line = 0.1)
Comparing the two dimensionality-reduction methods in two dimensions, t-SNE clearly separates the clusters much better!
Next, take the outlier-free original data df_no_outliers as the features X and the cluster labels clusters_predict as the target y, and build an LGBMClassifier model:
In [42]:
import lightgbm as lgb
import shap

clf_lgb = lgb.LGBMClassifier(colsample_bytree=0.8)

# Cast the selected columns to the category dtype so LightGBM
# treats them as categorical features
for col in ["job", "marital", "education", "housing", "loan", "default"]:
    df_no_outliers[col] = df_no_outliers[col].astype("category")

clf_lgb.fit(X=df_no_outliers,
            y=clusters_predict,
            feature_name="auto",
            categorical_feature="auto"
            )

[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000593 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 342
[LightGBM] [Info] Number of data points in the train set: 40690, number of used features: 8
[LightGBM] [Info] Start training from score -1.626166
[LightGBM] [Info] Start training from score -1.292930
[LightGBM] [Info] Start training from score -1.412943
[LightGBM] [Info] Start training from score -2.815215
[LightGBM] [Info] Start training from score -1.489282
Out[42]:
LGBMClassifier
LGBMClassifier(colsample_bytree=0.8)
In [43]:
explainer = shap.TreeExplainer(clf_lgb)              # build the explainer
shap_values = explainer.shap_values(df_no_outliers)  # compute the SHAP values
shap.summary_plot(shap_values, df_no_outliers, plot_type="bar", plot_size=(15, 10))
From the results we can see that the age field is the most important.
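To drill into what drives membership in one particular segment, the per-class SHAP values can be plotted on their own (a sketch; for a multiclass LightGBM model, the shap_values computed above is a list with one array per cluster):

# Beeswarm plot for cluster 0 only: sign and magnitude of each feature's
# contribution to a sample being assigned to that segment (illustrative)
shap.summary_plot(shap_values[0], df_no_outliers, plot_size=(15, 10))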
In [44]:
from sklearn.metrics import accuracy_score

y_pred = clf_lgb.predict(df_no_outliers)        # predict on the training data
acc = accuracy_score(y_pred, clusters_predict)  # accuracy of predictions vs. cluster labels

print('Training-set accuracy score: {0:0.4f}'.format(acc))
Training-set accuracy score: 1.0000
In [45]:
# Classification report
from sklearn.metrics import classification_report

print(classification_report(clusters_predict, y_pred))
precision recall f1-score support
0 1.00 1.00 1.00 8003
1 1.00 1.00 1.00 11168
2 1.00 1.00 1.00 9905
3 1.00 1.00 1.00 2437
4 1.00 1.00 1.00 9177
accuracy 1.00 40690
macro avg 1.00 1.00 1.00 40690
weighted avg 1.00 1.00 1.00 40690
In [46]:
# Original data with outliers removed
df_no_outliers = df[df.outliers == 0]
df_no_outliers["cluster"] = clusters_predict  # attach the cluster labels
Group by the cluster field and aggregate the main features:
In [47]:
df_no_outliers.groupby("cluster").agg({
"job":lambda x: x.value_counts().index[0],
"marital": lambda x: x.value_counts().index[0],
"education":lambda x: x.value_counts().index[0],
"housing":lambda x: x.value_counts().index[0],
"loan":lambda x: x.value_counts().index[0],
"age":"mean",
"balance":"mean",
"default":lambda x: x.value_counts().index[0]
}).sort_values("age").reset_index()
Out[47]:
|   | cluster | job | marital | education | housing | loan | age | balance | default |
|---|---------|-----|---------|-----------|---------|------|-----|---------|---------|
| 0 | 4 | technician | single | secondary | yes | no | 32.069740 | 794.696306 | no |
| 1 | 2 | blue-collar | married | secondary | yes | no | 34.569409 | 592.025644 | no |
| 2 | 3 | management | married | secondary | yes | no | 42.183012 | 7526.310217 | no |
| 3 | 0 | management | married | tertiary | no | no | 43.773960 | 872.797951 | no |
| 4 | 1 | blue-collar | married | secondary | no | no | 50.220989 | 836.407504 | no |
Reference (original English article): https://towardsdatascience.com/mastering-customer-segmentation-with-llm-3d9008235f41
In a follow-up post I will share a Transformer model + KMeans + PCA/t-SNE pipeline.
The code is finished and the write-up is in progress. As a preview, here is the PCA result on data embedded with a Transformer model (expanded to 384 dimensions): it really is much better!