
Bearing Fault Diagnosis with Machine Learning (Decision Tree, Random Forest, KNN, and SVM) in Python

Fault feature extraction means computing time-domain and frequency-domain statistical features from the vibration signal and, using indicators such as energy, spectral kurtosis, and amplitude, assembling a fault feature set. Extracting the fault features comprehensively and accurately is key to improving diagnostic accuracy, and it is also one of the more difficult steps in the overall rolling-bearing fault diagnosis workflow.

Some common time-domain and frequency-domain features are computed as follows:

[Image: formulas for common time-domain and frequency-domain statistical features]
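Since the formula images above may not reproduce here, the following is a minimal NumPy/SciPy sketch of a few typical time-domain and frequency-domain statistics; the particular feature set, the sampling rate, and the synthetic signal are illustrative assumptions, not the exact features used later in this article.

import numpy as np
from scipy import stats

def time_domain_features(x):
    # Basic time-domain statistics commonly used for bearing vibration signals
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    return {
        "mean": np.mean(x),
        "sd": np.std(x),
        "rms": rms,
        "peak": peak,
        "skewness": stats.skew(x),
        "kurtosis": stats.kurtosis(x, fisher=False),
        "crest_factor": peak / rms,
        "impulse_factor": peak / np.mean(np.abs(x)),
    }

def frequency_domain_features(x, fs):
    # Spectrum-based statistics: frequency centroid and RMS frequency
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    p = spectrum / np.sum(spectrum)          # normalised spectral weights
    centroid = np.sum(freqs * p)             # mean frequency
    rms_freq = np.sqrt(np.sum((freqs ** 2) * p))
    return {"freq_centroid": centroid, "rms_frequency": rms_freq}

# Example with a synthetic signal (purely illustrative)
fs = 12000                                    # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 157 * t) + 0.3 * np.random.randn(len(t))
print(time_domain_features(sig))
print(frequency_domain_features(sig, fs))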

In addition:

Time-domain features
Time-domain feature extraction - article by Data螺丝钉 on Zhihu
https://zhuanlan.zhihu.com/p/398752292
Frequency-domain features:
What features can be extracted from frequency-domain analysis of a signal, and what is their physical meaning? - answer by Xinquan on Zhihu
https://www.zhihu.com/question/60550840/answer/177778560
Energy features (wavelet packet sub-band energies; see the sketch after this feature overview)
Fractal features
Entropy features
XSpecEn: cross-spectral entropy between two sequences
XSampEn: cross-sample entropy between two sequences
Reference: Physiological time-series analysis using approximate entropy and sample entropy
XPermEn: cross-permutation entropy between two sequences
Reference: The coupling analysis of stock market indices based on cross-permutation entropy
XMSEn: multiscale cross-entropy between two sequences
Reference: Multiscale cross entropy: a novel algorithm for analyzing two time series
XK2En: cross-Kolmogorov (K2) entropy between two sequences
XFuzzEn: cross-fuzzy entropy between two sequences
Reference: Cross-fuzzy entropy: A new method to test pattern synchrony of bivariate time series
XDistEn: cross-distribution entropy between two sequences
Reference: Analysis of financial stock markets through the multiscale cross-distribution entropy based on the Tsallis entropy
XCondEn: corrected cross-conditional entropy between two sequences
Reference: Conditional entropy approach for the evaluation of the coupling strength
XApEn: cross-approximate entropy between two sequences
Reference: Randomness and degrees of irregularity
SyDyEn: symbolic dynamic entropy
Reference: A fault diagnosis scheme for planetary gearboxes using modified multi-scale symbolic dynamic entropy and mRMR feature selection
SpecEn: spectral entropy of a single sequence
Reference: A spectral entropy method for distinguishing regular and irregular motion of Hamiltonian systems
SlopEn: slope entropy
SampEn2D: bidimensional sample entropy of a data matrix
SampEn: sample entropy of a single sequence (see the sketch after this list)
rXMSEn: refined multiscale cross-entropy
PhasEn: phase entropy
PermEn2D: bidimensional permutation entropy
IncrEn: increment entropy
hXMSEn: hierarchical cross-entropy between two sequences
GridEn: gridded distribution entropy
EspEn2D: bidimensional Espinosa entropy
DispEn2D: bidimensional dispersion entropy
cXMSEn: composite multiscale cross-entropy
CoSiEn: cosine similarity entropy
BubbEn: bubble entropy
AttnEn: attention entropy
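The entries above correspond to function names in the EntropyHub toolkit. As a self-contained illustration, below is a minimal NumPy sketch of plain sample entropy (SampEn); the template length m = 2 and tolerance r = 0.2·std are conventional default choices, assumed here rather than taken from this article.

import numpy as np

def sample_entropy(x, m=2, r=None):
    # Sample entropy: -ln(A/B), where B counts template matches of length m
    # and A counts matches of length m+1, within tolerance r (Chebyshev distance).
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def count_matches(length):
        # Build all overlapping templates of the given length and count pairs
        # whose Chebyshev distance is within r (self-matches excluded).
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Example: white noise is more irregular than a pure sine (illustrative only)
rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(1000)))                  # relatively large
print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 1000))))   # relatively small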
Time-frequency domain features
Applications of multiresolution analysis in signal processing - Part 1
https://zhuanlan.zhihu.com/p/55
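As a companion to the wavelet packet sub-band energy feature mentioned above, here is a minimal sketch using PyWavelets; the wavelet ('db4'), the decomposition level, and the synthetic signal are illustrative assumptions.

import numpy as np
import pywt

def wavelet_packet_energies(x, wavelet="db4", level=3):
    # Decompose the signal into 2**level wavelet packet sub-bands and
    # return the relative energy of each sub-band (they sum to 1).
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")      # sub-bands ordered by frequency
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return energies / np.sum(energies)

# Example with a synthetic two-tone signal (illustrative only)
fs = 12000
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
print(wavelet_packet_energies(sig))                # 8 relative energies for level=3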

With this in mind, this project performs bearing fault diagnosis in Python with machine learning (decision tree, random forest, KNN, and SVM) and tunes the models with grid search. The data used in the project are not raw signals but pre-extracted features (kurtosis and others). The training set looks like this:

[Image: preview of the training set]

The test set looks like this:

[Image: preview of the test set]

The code is as follows:

# import the required modules
import numpy as np
import pandas as pd
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn.preprocessing import OrdinalEncoder
from sklearn.preprocessing import StandardScaler

# load the training data
train_data = pd.read_csv(r'training set.csv')

# encode the fault type
ord_enc = OrdinalEncoder()
train_data["fault_code"] = ord_enc.fit_transform(train_data[["fault"]])
train_data['fault_code'].unique()

# number of data points per fault type
train_data[['fault_code', 'fault']].value_counts()

# feature scaling
scaler = StandardScaler()
scaled_df = pd.DataFrame(scaler.fit_transform(train_data.iloc[:, :-2]))
scaled_df.head()

# restore the column names
scaled_df.columns = train_data.drop(columns=['fault', 'fault_code']).columns
scaled_train_data = pd.concat([scaled_df, train_data[['fault', 'fault_code']]], axis=1)
scaled_train_data

## Exploratory data analysis
# correlation matrix
rcParams['figure.figsize'] = 12, 10
sns.heatmap(scaled_train_data.iloc[:, :-2].corr(), annot=True, cmap='RdYlGn')
fig = plt.gcf()
plt.show()

# pairplot
rcParams['figure.figsize'] = 6, 5
sns.pairplot(scaled_train_data.drop(columns='fault_code'), hue='fault', palette='Dark2')
plt.show()

# process the test data
# load the test data
test_data = pd.read_csv(r'testing set.csv')
test_data
test_data['fault'].value_counts()

# encode with the encoder fitted on the training data
test_data["fault_code"] = ord_enc.transform(test_data[["fault"]])

# scale with the scaler fitted on the training data
scaled_df = pd.DataFrame(scaler.transform(test_data.iloc[:, :-2]))
scaled_df.head()

# restore the column names
scaled_df.columns = test_data.drop(columns=['fault', 'fault_code']).columns
scaled_test_data = pd.concat([scaled_df, test_data[['fault', 'fault_code']]], axis=1)
scaled_test_data

# X_train: training features
X_train = scaled_train_data.drop(columns=['sd', 'skewness', 'fault', 'fault_code'])
X_train.head()
# y_train: training labels
y_train = scaled_train_data['fault_code']
y_train.head()
# X_test: test features
X_test = scaled_test_data.drop(columns=['sd', 'skewness', 'fault', 'fault_code'])
X_test.head()
# y_test: test labels
y_test = scaled_test_data['fault_code']
y_test.head()

############## Decision tree classifier
from sklearn.tree import DecisionTreeClassifier
dt_clf = DecisionTreeClassifier().fit(X_train, y_train)
# prediction
dt_predictions = dt_clf.predict(X_test)
print(dt_predictions)
# train score vs test score
print('Train Score:', dt_clf.score(X_train, y_train), 'Test Score:', dt_clf.score(X_test, y_test))
# confusion matrix
fig, ax = plt.subplots(figsize=(5, 5))
from sklearn.metrics import plot_confusion_matrix  # removed in scikit-learn >= 1.2; use ConfusionMatrixDisplay.from_estimator there
plot_confusion_matrix(dt_clf, X_test, y_test, ax=ax)
# performance report
from sklearn.metrics import classification_report
labels = ['outer race', 'inner race', 'healthy']
print(classification_report(y_test, dt_predictions, target_names=labels))

# parameter grid for the grid search
params = {
    'min_samples_leaf': [1, 2, 3, 4, 5, 10],
    'criterion': ["gini", "entropy"],
    'max_depth': [1, 2, 3, 4, 6, 8],
    'min_samples_split': [2, 3, 4]
}
# instantiate the grid search model
from sklearn.model_selection import GridSearchCV
dt_best_clf = GridSearchCV(estimator=dt_clf,
                           param_grid=params,
                           cv=4, n_jobs=-1, verbose=1, scoring="accuracy")
dt_best_clf.fit(X_train, y_train)
# best estimator found by the grid search
dt_best_clf.best_estimator_
dt_best_clf = DecisionTreeClassifier(max_depth=2, min_samples_split=4, random_state=42).fit(X_train, y_train)
# prediction
dt_best_predictions = dt_best_clf.predict(X_test)
print(dt_best_predictions)
# train score vs test score
print('Train Score:', dt_best_clf.score(X_train, y_train), 'Test Score:', dt_best_clf.score(X_test, y_test))
# confusion matrix
fig, ax = plt.subplots(figsize=(5, 5))
plot_confusion_matrix(dt_best_clf, X_test, y_test, ax=ax)
# performance report
print(classification_report(y_test, dt_best_predictions, target_names=labels))
# plot the decision tree
from sklearn import tree
fig = plt.figure(figsize=(10, 10))
_ = tree.plot_tree(dt_best_clf,
                   feature_names=X_train.columns,
                   class_names=['inner race', 'outer race', 'healthy'],
                   filled=True)

######################################## Random forest classifier
# import the random forest classifier
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(random_state=2)
# train the random forest
rf_clf.fit(X_train, y_train)
# predict on the test set
rf_predictions = rf_clf.predict(X_test)
# train score vs test score
print('Train score:', rf_clf.score(X_train, y_train), 'Test Score:', rf_clf.score(X_test, y_test))
# confusion matrix
fig, ax = plt.subplots(figsize=(5, 5))
plot_confusion_matrix(rf_clf, X_test, y_test, ax=ax)
# performance report
print(classification_report(y_test, rf_predictions, target_names=labels))

# search space
n_estimators = [1, 5, 10, 20, 100, 120]
max_depth = [1, 2, 3]
min_samples_split = [2, 3, 4, 6, 8]
min_samples_leaf = [1, 2, 3]
random_grid = {'n_estimators': n_estimators, 'max_depth': max_depth,
               'min_samples_split': min_samples_split, 'min_samples_leaf': min_samples_leaf}
# use RandomizedSearchCV
from sklearn.model_selection import RandomizedSearchCV
rf_best_clf = RandomizedSearchCV(estimator=rf_clf, param_distributions=random_grid,
                                 n_iter=1000, cv=10, verbose=5, n_jobs=-1)
# fit on the training set
rf_best_clf.fit(X_train, y_train)
print('Random grid: ', random_grid, '\n')
# print the best parameters
print('Best Parameters: ', rf_best_clf.best_params_, ' \n')
# refit with the selected parameters and test
rf_best_clf = RandomForestClassifier(n_estimators=100, min_samples_split=4, min_samples_leaf=2, max_depth=1,
                                     random_state=90).fit(X_train, y_train)
rf_best_predictions = rf_best_clf.predict(X_test)
print(rf_best_predictions)
# train score vs test score
print('Train score:', rf_best_clf.score(X_train, y_train), 'Test score:', rf_best_clf.score(X_test, y_test))
# confusion matrix
fig, ax = plt.subplots(figsize=(5, 5))
plot_confusion_matrix(rf_best_clf, X_test, y_test, ax=ax)
# performance report
print(classification_report(y_test, rf_best_predictions, target_names=labels))

######################################## KNN classifier
# import KNeighborsClassifier
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier()
# fit on the training set
knn_clf.fit(X_train, y_train)
# prediction
knn_predictions = knn_clf.predict(X_test)
print(knn_predictions)
# train score vs test score
print('Train score:', knn_clf.score(X_train, y_train), 'Test score:', knn_clf.score(X_test, y_test))
# confusion matrix
fig, ax = plt.subplots(figsize=(5, 5))
plot_confusion_matrix(knn_clf, X_test, y_test, ax=ax)
# performance report
print(classification_report(y_test, knn_predictions, target_names=labels))

# hyperparameters to tune
leaf_size = [1, 2, 3, 4, 5]
n_neighbors = [1, 2, 3, 4, 5]
p = [1, 2, 3]
# collect them in a dictionary
hyperparameters = dict(leaf_size=leaf_size, n_neighbors=n_neighbors, p=p)
# grid search
knn_best_clf = GridSearchCV(knn_clf, hyperparameters, cv=10)
knn_best_clf.fit(X_train, y_train)
# print the best hyperparameters
print('Best leaf_size:', knn_best_clf.best_estimator_.get_params()['leaf_size'])
print('Best p:', knn_best_clf.best_estimator_.get_params()['p'])
print('Best n_neighbors:', knn_best_clf.best_estimator_.get_params()['n_neighbors'])
# prediction
knn_best_predictions = knn_best_clf.predict(X_test)
print(knn_best_predictions)
# train score vs test score
print('Train score:', knn_best_clf.score(X_train, y_train), 'Test score:', knn_best_clf.score(X_test, y_test))
# confusion matrix
fig, ax = plt.subplots(figsize=(5, 5))
plot_confusion_matrix(knn_best_clf, X_test, y_test, ax=ax)
# performance report
print(classification_report(y_test, knn_best_predictions, target_names=labels))

############################################ SVM classifier
# import the SVM classifier and train it
from sklearn.svm import SVC
svm_clf = SVC(kernel='rbf')
svm_clf.fit(X_train, y_train)
# prediction
svm_predictions = svm_clf.predict(X_test)
print(svm_predictions)
# train score vs test score
print('Train score:', svm_clf.score(X_train, y_train), 'Test score:', svm_clf.score(X_test, y_test))
# confusion matrix
fig, ax = plt.subplots(figsize=(5, 5))
plot_confusion_matrix(svm_clf, X_test, y_test, ax=ax)
# performance report
print(classification_report(y_test, svm_predictions, target_names=labels))

# hyperparameter tuning
param_grid = {'C': [0.02, 0.021, 0.022],
              'gamma': [0.8, 0.7, 0.6, 0.65],
              'kernel': ['rbf']}
grid = GridSearchCV(SVC(), param_grid, refit=True, verbose=3, cv=10)
# fit the grid search on the training set
grid.fit(X_train, y_train)
# print the best hyperparameters
print(grid.best_params_)
print(grid.best_estimator_)
# best SVM classifier
best_svm_clf = SVC(kernel='rbf', C=0.021, gamma=0.6).fit(X_train, y_train)
# prediction with the tuned model
best_svm_predictions = best_svm_clf.predict(X_test)
print(best_svm_predictions)
# train score vs test score
print('Train score:', best_svm_clf.score(X_train, y_train), 'Test score:', best_svm_clf.score(X_test, y_test))
# confusion matrix
fig, ax = plt.subplots(figsize=(5, 5))
plot_confusion_matrix(best_svm_clf, X_test, y_test, ax=ax)
# performance report
print(classification_report(y_test, best_svm_predictions, target_names=labels))
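As an optional follow-up sketch (assuming the tuned classifiers dt_best_clf, rf_best_clf, knn_best_clf, and best_svm_clf fitted above are still in scope), the four models could be compared on the test set in a single bar chart:

# collect the test accuracies of the tuned models and plot a simple comparison
tuned_models = {
    'Decision Tree': dt_best_clf,
    'Random Forest': rf_best_clf,
    'KNN': knn_best_clf,
    'SVM': best_svm_clf,
}
test_scores = {name: clf.score(X_test, y_test) for name, clf in tuned_models.items()}
print(test_scores)

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(list(test_scores.keys()), list(test_scores.values()))
ax.set_ylabel('Test accuracy')
ax.set_ylim(0, 1)
ax.set_title('Tuned model comparison on the test set')
plt.show()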

The resulting figures are as follows:

[Figures: correlation heatmap, pairplot, confusion matrices, and decision tree plot produced by the code above]

Complete code: Bearing Fault Diagnosis with Machine Learning (Decision Tree, Random Forest, KNN, and SVM) in Python

Areas of expertise: modern signal processing, machine learning, deep learning, digital twins, time series analysis, equipment defect detection, equipment anomaly detection, and intelligent equipment fault diagnosis and prognostics and health management (PHM).
