
【Python】Data Mining and Machine Learning (II)


【Experiment 1】Wheat Seed Classification (Softmax Regression)
The Seeds dataset records, for wheat kernels of different varieties, the area (Area), perimeter (Perimeter), compactness (Compactness), kernel length (Kernel.Length), kernel width (Kernel.Width), asymmetry coefficient (Asymmetry.Coeff), kernel groove length (Kernel.Groove), and the class label (Type). The dataset has 210 records, 7 features, and 1 class column with 3 possible labels: 1, 2 and 3 (data file: seeds.csv).
Build a classification model using Softmax regression (multinomial logistic regression; the model is recalled briefly after the sample data below):
(1) Split the data into training and test sets at a 7:3 ratio, and report the model's weight coefficients;
(2) Report the confusion matrix and classification report on the test set;
(3) Optional: draw a pairplot;
(4) Using the trained model, classify the following wheat seeds:
14.56 14.39 0.88 5.57 3.27 2.27 5.22
18.68 16.23 0.89 6.23 3.72 3.22 6.08
12.47 13.55 0.85 5.34 2.91 4.37 5.14
12.97 13.73 0.86 5.39 3.01 4.79 5.28
14.48 14.49 0.85 5.71 3.14 5.30 5.62
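
For reference, Softmax regression (multinomial logistic regression) models the probability of class $k$ for a feature vector $x$ as

$$P(y = k \mid x) = \frac{\exp(w_k^\top x + b_k)}{\sum_{j=1}^{K} \exp(w_j^\top x + b_j)},$$

and predicts the class with the largest probability. The manual prediction step at the end of the script below applies exactly this formula using the fitted clf.coef_ and clf.intercept_.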


Code Implementation


import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# 0. Load the data
data = pd.read_csv('seeds.csv')
data = data.values  # convert the DataFrame to a NumPy array
xx = data[:, 0:-1]  # the first 7 columns are the features
yy = data[:, 7]  # the label column; values are 1, 2, 3
np.random.seed(2022)  # fix the random seed for reproducibility

# 1. Split into training and test sets (7:3) and fit the model
x_train, x_test, y_train, y_test = train_test_split(xx, yy, test_size=0.3, random_state=2022)
clf = LogisticRegression(max_iter=500).fit(x_train, y_train)  # the default max_iter=100 does not converge on this data, so raise it to 500
print('coef:', clf.coef_)  # weight coefficients, one row per class
print('intercept:', clf.intercept_)  # intercepts

# 2. Confusion matrix and classification report on the test set
y_pred = clf.predict(x_test)
cm = confusion_matrix(y_test, y_pred)  # confusion matrix
print('Confusion matrix:\n', cm)
report = metrics.classification_report(y_test, y_pred)  # classification report
print('Classification report:\n', report)

# 3. Pairplot of the features, colored by class (optional)
data = pd.DataFrame(data)
data.rename(
    columns={0: 'Area', 1: 'Perimeter', 2: 'Compactness', 3: 'Kernel.Length', 4: 'Kernel.Width', 5: 'Asymmetry.Coeff',
             6: 'Kernel.Groove', 7: 'Type'}, inplace=True)
kind_dict = {1: "1", 2: "2", 3: "3"}  # map numeric labels to strings so seaborn treats Type as categorical
data['Type'] = data['Type'].map(kind_dict)
sns.pairplot(data, hue='Type')
# plt.show()

# 4. Predict the class of the following new wheat seeds
predict = [[14.56, 14.39, 0.88, 5.57, 3.27, 2.27, 5.22],
           [18.68, 16.23, 0.89, 6.23, 3.72, 3.22, 6.08],
           [12.47, 13.55, 0.85, 5.34, 2.91, 4.37, 5.14],
           [12.97, 13.73, 0.86, 5.39, 3.01, 4.79, 5.28],
           [14.48, 14.49, 0.85, 5.71, 3.14, 5.30, 5.62]
           ]
predict = np.array(predict)  # convert the list to a NumPy array


def softmax(x):
    e_x = np.exp(x - np.max(x))  # subtract the max to avoid overflow in exp()
    return e_x / e_x.sum(axis=0)


# Apply the softmax model manually: score each class with w_k . x + b_k, then take the argmax.
yy = [np.argmax(softmax(np.dot(clf.coef_, predict[i, :]) + clf.intercept_))
      for i in range(len(predict))]
result = [x + 1 for x in yy]  # argmax is 0-based, while the class labels are 1, 2, 3
print('Predicted classes:', result)
plt.show()
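
As a quick sanity check (not part of the original script), the fitted model should return the same labels directly, and predict_proba exposes the per-class probabilities computed by the same softmax scoring:

# Optional check: let scikit-learn do the scoring itself.
print('clf.predict:', clf.predict(predict))                 # class labels (1, 2, 3)
print('clf.predict_proba:\n', clf.predict_proba(predict))   # per-class probabilities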


【Experiment 2】Breast Tumor Classification (LDA)

Load the dataset from sklearn:
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
The breast cancer dataset contains 569 samples. For each suspected tumor region, 30 features were extracted, and the label is binary. The class counts are listed below (viewed as a table, the data has 31 columns, where the 31st column is the sample label: 0 = malignant, 1 = benign):
Type         Count
benign       357
malignant    212
The 30 attributes are the mean, standard error, and "worst" (largest) value of 10 measurements of the suspected tumor region, namely its radius, gray-scale texture, perimeter, area, smoothness, and so on:
(1) radius (mean of distances from center to points on the perimeter)
(2) texture (standard deviation of gray-scale values)
(3) perimeter
(4) area
(5) smoothness (local variation in radius lengths)
(6) compactness (perimeter^2 / area - 1.0)
(7) concavity (severity of concave portions of the contour)
(8) concave points (number of concave portions of the contour)
(9) symmetry
(10) fractal dimension (“coastline approximation” - 1)
The minimum and maximum values of each attribute over the dataset are listed below (they can be verified directly from sklearn; see the check after the table):
Attribute                        Min      Max
radius (mean): 6.981 28.11
texture (mean): 9.71 39.28
perimeter (mean): 43.79 188.5
area (mean): 143.5 2501.0
smoothness (mean): 0.053 0.163
compactness (mean): 0.019 0.345
concavity (mean): 0.0 0.427
concave points (mean): 0.0 0.201
symmetry (mean): 0.106 0.304
fractal dimension (mean): 0.05 0.097
radius (standard error): 0.112 2.873
texture (standard error): 0.36 4.885
perimeter (standard error): 0.757 21.98
area (standard error): 6.802 542.2
smoothness (standard error): 0.002 0.031
compactness (standard error): 0.002 0.135
concavity (standard error): 0.0 0.396
concave points (standard error): 0.0 0.053
symmetry (standard error): 0.008 0.079
fractal dimension (standard error): 0.001 0.03
radius (worst): 7.93 36.04
texture (worst): 12.02 49.54
perimeter (worst): 50.41 251.2
area (worst): 185.2 4254.0
smoothness (worst): 0.071 0.223
compactness (worst): 0.027 1.058
concavity (worst): 0.0 1.252
concave points (worst): 0.0 0.291
symmetry (worst): 0.156 0.664
fractal dimension (worst): 0.055 0.208
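
The class counts and the value ranges quoted above can be checked directly from the sklearn dataset object; a short verification (standard sklearn/numpy calls only) is:

# Quick check of the counts and ranges quoted above.
import numpy as np
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
print(dict(zip(data.target_names, np.bincount(data.target))))  # {'malignant': 212, 'benign': 357}
print(data.data.min(axis=0)[:3])  # per-feature minima (first three features)
print(data.data.max(axis=0)[:3])  # per-feature maxima (first three features)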


Your tasks:
(1) Use LDA (linear discriminant analysis) to build a binary classification model for benign vs. malignant tumors (the projection behind LDA is recalled briefly after this list);
(2) Split the dataset into training and test sets (6:4), compute the Accuracy on the test set, and report the confusion matrix and classification report;
(3) Optional: report the average result of 4-fold cross-validation on the dataset (a sketch is given after the script below);
(4) Optional: given the new test data file ceshi.csv, output the predictions for the samples in that file (also sketched after the script below).
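
For background, two-class LDA projects each sample $x$ onto a single direction $w$ chosen to maximize the between-class separation relative to the within-class scatter; the classical solution is

$$w \propto S_W^{-1}(\mu_1 - \mu_0),$$

where $S_W$ is the pooled within-class scatter matrix and $\mu_0$, $\mu_1$ are the two class mean vectors. Classification then thresholds the projected value $w^\top x$, which is why n_components=1 is the only possible choice in the code below.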

Code Implementation


import numpy as np
from sklearn import metrics
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split, cross_val_score

# Load the data
data = load_breast_cancer()
y_data = data.target
x_data = data.data
label_name = data['target_names']  # ['malignant', 'benign']
feature_name = data['feature_names']
np.random.seed(2022)  # fix the random seed for reproducibility

# 1. Fit an LDA binary classifier (benign vs. malignant) on the full dataset
lda = LDA(n_components=1)  # with two classes, LDA can project onto at most one dimension
lda.fit(x_data, y_data)
y_pred = lda.predict(x_data)  # predicted class labels
print('accuracy:\n', metrics.accuracy_score(y_data, y_pred))
print('precision:\n', metrics.precision_score(y_data, y_pred))
print('recall:\n', metrics.recall_score(y_data, y_pred))

# 2. Split into training and test sets (6:4), then report the test Accuracy,
#    the confusion matrix and the classification report
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.4, random_state=2022)
clf = LDA(n_components=1).fit(x_train, y_train)
print('coef:', clf.coef_)
print('intercept:', clf.intercept_)
print('Train Accuracy:', clf.score(x_train, y_train))  # compare train vs. test accuracy
print('Test Accuracy:', clf.score(x_test, y_test))

y_predict = clf.predict(x_test)  # predict on the test set with the model trained on the training split (not the full-data lda)
cm = confusion_matrix(y_test, y_predict)  # confusion matrix
print('Confusion matrix:\n', cm)
report = metrics.classification_report(y_test, y_predict)  # classification report
print('Classification report:\n', report)
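
The two optional tasks are not covered by the script above; a minimal sketch follows. It assumes that ceshi.csv contains one sample per row with the same 30 feature columns, in the same order as data.feature_names; the file's actual layout is not shown in the original post, so treat that as an assumption. cross_val_score is already imported at the top of the script.

# 3. Optional: 4-fold cross-validation on the full dataset (mean accuracy).
scores = cross_val_score(LDA(n_components=1), x_data, y_data, cv=4)
print('4-fold CV accuracies:', scores)
print('4-fold CV mean accuracy:', scores.mean())

# 4. Optional: predict labels for the new samples in ceshi.csv.
# Assumption: ceshi.csv holds the 30 feature columns in the same order as data.feature_names.
import pandas as pd
ceshi = pd.read_csv('ceshi.csv')
ceshi_pred = clf.predict(ceshi.values)  # model trained on the training split
print('Predictions for ceshi.csv (0 = malignant, 1 = benign):', ceshi_pred)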



If you have read this far, congratulations on picking up another skill.
