
Several Machine Learning Methods (kNN, Logistic Regression, SVM, Decision Trees, Random Forests, Extremely Randomized Trees, Ensemble Learning, AdaBoost, GBDT)

I. Fundamentals of Discriminative and Generative Models

Example: to decide whether a melon is good or bad, the discriminative approach learns a model from historical data, then extracts the features of the melon in question and directly predicts the probability that it is good and the probability that it is bad.

Example: the generative approach first learns a model of good melons from the features of good melons, and a model of bad melons from the features of bad melons. To predict, it extracts the features of the new melon, feeds them into the good-melon model to get one probability, feeds them into the bad-melon model to get another, and predicts whichever class has the larger probability.

Example:

Suppose your task is to identify which language a piece of speech belongs to. Someone walks up and says a sentence, and you need to recognize whether it is Chinese, English, French, and so on. There are two ways to achieve this:

1. Learn every language. You spend a great deal of effort mastering Chinese, English, French, and so on; by "mastering" I mean you know which sounds correspond to which language. Then whenever someone speaks to you, you can tell which language it is.

2. Don't learn the languages themselves; learn only the differences between them, and then judge (classify). That is, you only learn how the pronunciations of Chinese, English, and the other languages differ, and knowing that difference is enough.

The first approach is the generative method; the second is the discriminative method.

A generative model is a full probability model over all the variables, while a discriminative model only models the conditional probability of the target variable given the observed variables. A generative model can therefore be used to simulate (i.e., generate) the distribution of any variable in the model, while a discriminative model can only sample the target variable given the observations; in other words, a discriminative model learns P(Y|X) directly, whereas a generative model learns the joint P(X, Y) and recovers P(Y|X) = P(X, Y) / P(X). Because a discriminative model does not model the distribution of the observed variables, it cannot express more complex relationships between the observed and target variables. For the same reason, generative models are better suited to unsupervised tasks such as clustering.

Conditional probability: the probability that event A occurs given that event B has occurred, written P(A|B) and read "the probability of A given that B has occurred". By definition, P(A|B) = P(A ∩ B) / P(B).

Bayes' formula:

P(X|Y) = P(Y|X) · P(X) / P(Y)

where:

P(X) is the probability that event X occurs, also called the prior probability;

P(Y|X) is the probability that event Y occurs given that X has occurred, also called the likelihood;

P(X|Y) is the probability that event X occurs given that Y has occurred, also called the posterior probability.

Maximum likelihood estimation (MLE) is a method for estimating the parameters of a probability model: it chooses the parameter values under which the observed data are most probable.
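As a small worked example (not in the original post): given n i.i.d. coin flips x_1, …, x_n ∈ {0, 1}, the likelihood of the heads probability p is

L(p) = ∏_{i=1}^{n} p^{x_i} (1 − p)^{1 − x_i}

Setting d log L(p) / dp = 0 gives the familiar estimate

p̂_MLE = (1/n) ∑_{i=1}^{n} x_i

i.e., the observed fraction of heads.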

 

Conditional probability: here, the probability that a melon is good, given that its color is green.

Prior probability: the probability of the "cause" as revealed by common sense, experience, or statistics; here, the probability that a melon's color is green.

Posterior probability: the probability of the "cause" inferred after observing the "effect". That is, if we already know a melon is good, what is the probability that its color is green? Relating the posterior to the prior is exactly what Bayesian decision theory solves.
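A quick numeric illustration with made-up numbers (not from the original post): suppose P(green) = 0.4, P(good) = 0.3, and the conditional P(good | green) = 0.525. Bayes' rule then gives the posterior

P(green | good) = P(good | green) · P(green) / P(good) = 0.525 × 0.4 / 0.3 = 0.7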

Based on the attribute conditional independence assumption, the posterior probability over multiple attributes can be written as

P(c|x) = P(c) · P(x|c) / P(x) = (P(c) / P(x)) · ∏_{i=1}^{d} P(x_i|c)

where d is the number of attributes and x_i is the value of x on the i-th attribute.
Since P(x) is identical for every class, the Bayes decision rule based on maximum likelihood yields the naive Bayes expression:

h_nb(x) = argmax_{c ∈ Y} P(c) · ∏_{i=1}^{d} P(x_i|c)

Naive Bayes implementation:

# coding:utf-8
# P(y|x) = [P(x|y) * P(y)] / P(x)
import numpy as np
import pandas as pd


class Naive_Bayes:
    def __init__(self):
        pass

    # Naive Bayes training: estimate class priors and class-conditional probabilities
    def nb_fit(self, X, y):
        classes = y[y.columns[0]].unique()
        class_count = y[y.columns[0]].value_counts()
        # class prior P(y)
        class_prior = class_count / len(y)
        print('==class_prior:', class_prior)
        # class-conditional probabilities, i.e. P(x_i = value | y = class)
        prior = dict()
        for col in X.columns:
            for j in classes:
                p_x_y = X[(y == j).values][col].value_counts()
                for i in p_x_y.index:
                    prior[(col, i, j)] = p_x_y[i] / class_count[j]
        print('==prior:', prior)
        # keep the estimates on the instance so predict() does not rely on globals
        self.classes, self.class_prior, self.prior = classes, class_prior, prior
        return classes, class_prior, prior

    # predict a new instance: argmax_c P(y=c) * prod_i P(x_i | y=c)
    def predict(self, X_test):
        res = []
        for c in self.classes:
            p_y = self.class_prior[c]
            p_x_y = 1
            for i in X_test.items():
                # key is (feature name, feature value, class)
                p_x_y *= self.prior[tuple(list(i) + [c])]
            res.append(p_y * p_x_y)
        return self.classes[np.argmax(res)]


if __name__ == "__main__":
    x1 = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3]
    x2 = ['S', 'M', 'M', 'S', 'S', 'S', 'M', 'M', 'L', 'L', 'L', 'M', 'M', 'L', 'L']
    y = [-1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, -1]
    df = pd.DataFrame({'x1': x1, 'x2': x2, 'y': y})
    print('==df:\n', df)
    X = df[['x1', 'x2']]
    y = df[['y']]
    X_test = {'x1': 2, 'x2': 'S'}
    nb = Naive_Bayes()
    classes, class_prior, prior = nb.nb_fit(X, y)
    print('Predicted class for the test sample:', nb.predict(X_test))
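One caveat with the dictionary lookup above: a feature value never seen together with class c during training raises a KeyError (equivalently, a zero probability that wipes out the whole product). A common remedy, not part of the original code, is Laplace smoothing; a minimal sketch of how nb_fit's inner loop might change:

# Hypothetical variant with Laplace smoothing (alpha = 1):
# P(x_i = v | y = c) = (count(v, c) + 1) / (count(c) + K),
# where K is the number of distinct values feature x_i can take.
for col in X.columns:
    K = X[col].nunique()                      # distinct values of this feature
    for j in classes:
        counts = X[(y == j).values][col].value_counts()
        for v in X[col].unique():             # include values unseen under class j
            prior[(col, v, j)] = (counts.get(v, 0) + 1) / (class_count[j] + K)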

   

Naive Bayes classifier code:

The naive Bayes classifier adopts the "attribute conditional independence assumption": given the class, all attributes are assumed to be mutually independent. In other words, each attribute is assumed to influence the classification result independently of the others.

Here we use GaussianNB (Gaussian naive Bayes), whose probability density function is

p(x_i | y) = 1 / (√(2π) · σ_y) · exp(−(x_i − μ_y)² / (2σ_y²))

import math


class NaiveBayes:
    def __init__(self):
        self.model = None

    # sample mean
    @staticmethod
    def mean(X):
        """Compute the mean of X (list or np.ndarray). Returns a float."""
        return sum(X) / float(len(X))

    # standard deviation
    def stdev(self, X):
        """Compute the (population) standard deviation of X. Returns a float."""
        avg = self.mean(X)
        return math.sqrt(sum([pow(x - avg, 2) for x in X]) / float(len(X)))

    # probability density function
    def gaussian_probability(self, x, mean, stdev):
        """Density of x under a Gaussian with the given mean and stdev."""
        exponent = math.exp(-(math.pow(x - mean, 2) /
                              (2 * math.pow(stdev, 2))))
        return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

    # process X_train
    def summarize(self, train_data):
        """Compute (mean, stdev) for each feature column of train_data."""
        return [(self.mean(i), self.stdev(i)) for i in zip(*train_data)]

    # per-class means and standard deviations
    def fit(self, X, y):
        labels = list(set(y))
        data = {label: [] for label in labels}
        for f, label in zip(X, y):
            data[label].append(f)
        # per-class (mean, stdev) for every feature
        self.model = {
            label: self.summarize(value) for label, value in data.items()
        }
        return 'gaussianNB train done!'

    # compute the likelihood of the input under each class model
    def calculate_probabilities(self, input_data):
        """Return {label: probability} for input_data under each class Gaussian.
        e.g. model: {0.0: [(5.0, 0.37), (3.42, 0.40)], 1.0: [(5.8, 0.449), (2.7, 0.27)]}
             input_data: [1.1, 2.2]
        """
        probabilities = {}
        for label, value in self.model.items():
            probabilities[label] = 1
            for i in range(len(value)):
                mean, stdev = value[i]
                probabilities[label] *= self.gaussian_probability(
                    input_data[i], mean, stdev)
        return probabilities

    # predicted class: the label with the highest probability
    def predict(self, X_test):
        # e.g. {0.0: 2.9680340789325763e-27, 1.0: 3.5749783019849535e-26}
        label = sorted(self.calculate_probabilities(X_test).items(),
                       key=lambda x: x[-1])[-1][0]
        return label

    # accuracy on a test set
    def score(self, X_test, y_test):
        right = 0
        for X, y in zip(X_test, y_test):
            if self.predict(X) == y:
                right += 1
        return right / float(len(X_test))


def test_bayes_model():
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2)
    model = NaiveBayes()
    model.fit(X_train, y_train)
    print(model.predict([4.4, 3.2, 1.3, 0.2]))
    print('accuracy:', model.score(X_test, y_test))


if __name__ == '__main__':
    test_bayes_model()
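For comparison, scikit-learn ships the same model; a minimal usage sketch on the iris split above (equivalent to the hand-rolled class, assuming default parameters):

from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2)
clf = GaussianNB()
clf.fit(X_train, y_train)
print(clf.predict([[4.4, 3.2, 1.3, 0.2]]))   # predicted class
print('accuracy:', clf.score(X_test, y_test))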

A Bayesian network example based on pgmpy:

pgmpy is a Python package for probabilistic graphical models. It implements common models such as Bayesian networks and Markov chain Monte Carlo methods, along with the corresponding inference algorithms.

The example below is the classic "student" network for the quality of the recommendation letter a student receives; the directed graph and the probability tables are shown in a figure in the original post.

Code:

# coding:utf-8
# Install pgmpy:
#   git clone https://github.com/pgmpy/pgmpy
#   cd pgmpy
#   python setup.py install
from pgmpy.factors.discrete import TabularCPD
from pgmpy.models import BayesianModel

student_model = BayesianModel([('D', 'G'),
                               ('I', 'G'),
                               ('G', 'L'),
                               ('I', 'S')])

# grade node
grade_cpd = TabularCPD(
    variable='G',           # node name
    variable_card=3,        # number of values the node can take
    values=[[0.3, 0.05, 0.9, 0.5],   # the node's probability table
            [0.4, 0.25, 0.08, 0.3],
            [0.3, 0.7, 0.02, 0.2]],
    evidence=['I', 'D'],    # the node's parents
    evidence_card=[2, 2]    # number of values each parent can take
)

# exam-difficulty node
difficulty_cpd = TabularCPD(
    variable='D',
    variable_card=2,
    values=[[0.6, 0.4]]
)

# intelligence node
intel_cpd = TabularCPD(
    variable='I',
    variable_card=2,
    values=[[0.7, 0.3]]
)

# recommendation-letter node
letter_cpd = TabularCPD(
    variable='L',
    variable_card=2,
    values=[[0.1, 0.4, 0.99],
            [0.9, 0.6, 0.01]],
    evidence=['G'],
    evidence_card=[3]
)

# SAT-score node
sat_cpd = TabularCPD(
    variable='S',
    variable_card=2,
    values=[[0.95, 0.2],
            [0.05, 0.8]],
    evidence=['I'],
    evidence_card=[2]
)

student_model.add_cpds(
    grade_cpd,
    difficulty_cpd,
    intel_cpd,
    letter_cpd,
    sat_cpd
)
print(student_model.get_cpds())
print('Active trail from D:', student_model.active_trail_nodes('D'))
print('Active trail from I:', student_model.active_trail_nodes('I'))
print(student_model.local_independencies('G'))
# print(student_model.get_independencies())
# print(student_model.to_markov_model())

# Bayesian inference via variable elimination
from pgmpy.inference import VariableElimination
student_infer = VariableElimination(student_model)
prob_G = student_infer.query(variables=['G'])
print('Marginal distribution over grades, prob_G:', prob_G)
prob_G = student_infer.query(
    variables=['G'],
    evidence={'I': 1, 'D': 0})
print('Grade distribution for a smart student on an easy exam, prob_G:', prob_G)
# prob_G = student_infer.query(
#     variables=['G'],
#     evidence={'I': 0, 'D': 1})
# print(prob_G)

# # Generate data and learn the CPDs by maximum likelihood:
# import numpy as np
# import pandas as pd
# raw_data = np.random.randint(low=0, high=2, size=(1000, 5))
# data = pd.DataFrame(raw_data, columns=['D', 'I', 'G', 'L', 'S'])
# data.head()
#
# from pgmpy.models import BayesianModel
# from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator
# model = BayesianModel([('D', 'G'), ('I', 'G'), ('I', 'S'), ('G', 'L')])
# # fit the model by maximum likelihood estimation
# model.fit(data, estimator=MaximumLikelihoodEstimator)
# for cpd in model.get_cpds():
#     # print each conditional probability distribution
#     print("CPD of {variable}:".format(variable=cpd.variable))
#     print(cpd)

II. Machine Learning

Detailed post on kNN: https://blog.csdn.net/fanzonghao/article/details/86411102

Detailed post on decision trees: https://blog.csdn.net/fanzonghao/article/details/85246720

1. SVM: finding the optimal margin

Optimal solution under equality constraints (figure in the original post).

Optimal solution under inequality constraints: via the KKT conditions (figure in the original post).

The final classifier is

f(x) = sign( ∑_{i=1}^{N} α_i y_i K(x, x_i) + b )

The parameter C (the penalty on the slack variables) controls the bias-variance trade-off: the larger C is, the higher the variance and the lower the bias of the resulting model, so it tends to overfit;

the smaller C is, the lower the variance and the higher the bias, so it tends to underfit.
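The role of C can be read directly off the standard soft-margin objective (reconstructed here, since the original formula images are not preserved):

min_{w, b, ξ}  (1/2)‖w‖² + C ∑_{i=1}^{N} ξ_i    s.t.  y_i (wᵀx_i + b) ≥ 1 − ξ_i,  ξ_i ≥ 0

A large C punishes slack heavily, pushing the margin to fit every training point; a small C tolerates violations and yields a wider, smoother margin.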

Derivation (figures in the original post):

SVM example implementing the SMO algorithm:

import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt


def create_data():
    iris = load_iris()
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df['label'] = iris.target
    df.columns = [
        'sepal length', 'sepal width', 'petal length', 'petal width', 'label'
    ]
    data = np.array(df.iloc[:100, [0, 1, -1]])
    # relabel class 0 as -1 so labels are in {-1, +1}
    for i in range(len(data)):
        if data[i, -1] == 0:
            data[i, -1] = -1
    return data[:, :2], data[:, -1]


X, y = create_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
print('==X_train.shape:', X_train.shape)
print('==y_train.shape:', y_train.shape)
plt.scatter(X[:50, 0], X[:50, 1], label='0', color='r')
plt.scatter(X[50:, 0], X[50:, 1], label='1', color='g')
plt.legend()
# plt.show()


# w = sum_i alpha_i * y_i * x_i
class SVM:
    def __init__(self, max_iter=100, kernel='linear'):
        self.max_iter = max_iter
        self._kernel = kernel

    def init_args(self, features, labels):
        self.m, self.n = features.shape  # m samples, n feature dimensions
        self.X = features
        self.Y = labels
        self.b = 0.0
        self.alpha = np.ones(self.m)
        # cache E_i for every sample in a list
        self.E = [self._E(i) for i in range(self.m)]
        # penalty parameter on the slack variables
        self.C = 1.0

    def _KKT(self, i):
        y_g = self._g(i) * self.Y[i]
        if self.alpha[i] == 0:
            return y_g >= 1
        elif 0 < self.alpha[i] < self.C:
            return y_g == 1
        else:
            return y_g <= 1

    # g(x): prediction for sample X[i]
    def _g(self, i):
        r = self.b
        for j in range(self.m):
            r += self.alpha[j] * self.Y[j] * self.kernel(self.X[i], self.X[j])
        return r

    # E(x): difference between the prediction g(x) and the label y
    def _E(self, i):
        return self._g(i) - self.Y[i]

    # kernel function
    def kernel(self, x1, x2):
        if self._kernel == 'linear':
            return sum([x1[k] * x2[k] for k in range(self.n)])
        elif self._kernel == 'poly':
            return (sum([x1[k] * x2[k] for k in range(self.n)]) + 1) ** 2
        return 0

    def _init_alpha(self):
        # outer loop: first scan samples with 0 < alpha < C and check KKT
        index_list = [i for i in range(self.m) if 0 < self.alpha[i] < self.C]
        # then fall back to the rest of the training set
        non_satisfy_list = [i for i in range(self.m) if i not in index_list]
        index_list.extend(non_satisfy_list)
        for i in index_list:
            if self._KKT(i):
                continue
            E1 = self.E[i]
            # if E1 >= 0, pick the smallest E2; if E1 < 0, pick the largest
            if E1 >= 0:
                j = min(range(self.m), key=lambda x: self.E[x])
            else:
                j = max(range(self.m), key=lambda x: self.E[x])
            return i, j
        return None  # every sample satisfies KKT: converged

    def _compare(self, _alpha, L, H):
        if _alpha > H:
            return H
        elif _alpha < L:
            return L
        else:
            return _alpha

    def fit(self, features, labels):
        self.init_args(features, labels)
        for t in range(self.max_iter):
            pair = self._init_alpha()
            if pair is None:
                break  # KKT satisfied everywhere, stop training
            i1, i2 = pair
            # bounds on alpha2
            if self.Y[i1] == self.Y[i2]:
                L = max(0, self.alpha[i1] + self.alpha[i2] - self.C)
                H = min(self.C, self.alpha[i1] + self.alpha[i2])
            else:
                L = max(0, self.alpha[i2] - self.alpha[i1])
                H = min(self.C, self.C + self.alpha[i2] - self.alpha[i1])
            E1 = self.E[i1]
            E2 = self.E[i2]
            # eta = K11 + K22 - 2 * K12
            eta = self.kernel(self.X[i1], self.X[i1]) + self.kernel(
                self.X[i2],
                self.X[i2]) - 2 * self.kernel(self.X[i1], self.X[i2])
            if eta <= 0:
                continue
            # E1 - E2, following the book (Li Hang, "Statistical Learning
            # Methods", pp. 130-131)
            alpha2_new_unc = self.alpha[i2] + self.Y[i2] * (E1 - E2) / eta
            alpha2_new = self._compare(alpha2_new_unc, L, H)
            alpha1_new = self.alpha[i1] + self.Y[i1] * self.Y[i2] * (
                self.alpha[i2] - alpha2_new)
            b1_new = -E1 - self.Y[i1] * self.kernel(self.X[i1], self.X[i1]) * (
                alpha1_new - self.alpha[i1]) - self.Y[i2] * self.kernel(
                    self.X[i2],
                    self.X[i1]) * (alpha2_new - self.alpha[i2]) + self.b
            b2_new = -E2 - self.Y[i1] * self.kernel(self.X[i1], self.X[i2]) * (
                alpha1_new - self.alpha[i1]) - self.Y[i2] * self.kernel(
                    self.X[i2],
                    self.X[i2]) * (alpha2_new - self.alpha[i2]) + self.b
            if 0 < alpha1_new < self.C:
                b_new = b1_new
            elif 0 < alpha2_new < self.C:
                b_new = b2_new
            else:
                # take the midpoint
                b_new = (b1_new + b2_new) / 2
            # update the parameters
            self.alpha[i1] = alpha1_new
            self.alpha[i2] = alpha2_new
            self.b = b_new
            self.E[i1] = self._E(i1)
            self.E[i2] = self._E(i2)
        return 'train done!'

    def predict(self, data):
        r = self.b
        for i in range(self.m):
            r += self.alpha[i] * self.Y[i] * self.kernel(data, self.X[i])
        return 1 if r > 0 else -1

    def score(self, X_test, y_test):
        right_count = 0
        for i in range(len(X_test)):
            result = self.predict(X_test[i])
            if result == y_test[i]:
                right_count += 1
        return right_count / len(X_test)

    # def _weight(self):
    #     # linear model
    #     yx = self.Y.reshape(-1, 1) * self.X
    #     self.w = np.dot(yx.T, self.alpha)
    #     return self.w


svm = SVM(max_iter=200)
svm.fit(X_train, y_train)
score = svm.score(X_test, y_test)
print('===score:', score)

SVM example: classifying the fruit dataset with scikit-learn:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
import matplotlib.patches as mpatches
from matplotlib.colors import ListedColormap


def plot_class_regions_for_classifier(clf, X, y, X_test=None, y_test=None, title=None,
                                      target_names=None, plot_decision_regions=True):
    """Visualize a classifier's decision regions.
    Only works for data with two features.
    """
    num_classes = np.amax(y) + 1
    color_list_light = ['#FFFFAA', '#EFEFEF', '#AAFFAA', '#AAAAFF']
    color_list_bold = ['#EEEE00', '#000000', '#00CC00', '#0000CC']
    cmap_light = ListedColormap(color_list_light[0:num_classes])
    cmap_bold = ListedColormap(color_list_bold[0:num_classes])

    h = 0.03
    k = 0.5
    x_plot_adjust = 0.1
    y_plot_adjust = 0.1
    plot_symbol_size = 50

    x_min = X[:, 0].min()
    x_max = X[:, 0].max()
    y_min = X[:, 1].min()
    y_max = X[:, 1].max()
    x2, y2 = np.meshgrid(np.arange(x_min - k, x_max + k, h), np.arange(y_min - k, y_max + k, h))

    # classify every point of the mesh to draw the decision regions
    P = clf.predict(np.c_[x2.ravel(), y2.ravel()])
    P = P.reshape(x2.shape)
    plt.figure()
    if plot_decision_regions:
        plt.contourf(x2, y2, P, cmap=cmap_light, alpha=0.8)

    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, s=plot_symbol_size, edgecolor='black')
    plt.xlim(x_min - x_plot_adjust, x_max + x_plot_adjust)
    plt.ylim(y_min - y_plot_adjust, y_max + y_plot_adjust)

    if X_test is not None:
        plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cmap_bold, s=plot_symbol_size,
                    marker='^', edgecolor='black')
        train_score = clf.score(X, y)
        test_score = clf.score(X_test, y_test)
        title = title + "\nTrain score = {:.2f}, Test score = {:.2f}".format(train_score, test_score)

    if target_names is not None:
        legend_handles = []
        for i in range(0, len(target_names)):
            patch = mpatches.Patch(color=color_list_bold[i], label=target_names[i])
            legend_handles.append(patch)
        plt.legend(loc=0, handles=legend_handles)

    if title is not None:
        plt.title(title)
    plt.show()


# load the dataset
fruits_df = pd.read_table('fruit_data_with_colors.txt')
X = fruits_df[['width', 'height']]
y = fruits_df['fruit_label'].copy()
# relabel everything that is not an apple as 0
y[y != 1] = 0

# split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/4, random_state=0)
print(y_test.shape)

# different values of C
c_values = [0.0001, 1, 100]
for c_value in c_values:
    # build the model
    svm_model = SVC(C=c_value, kernel='rbf')
    # train the model
    svm_model.fit(X_train, y_train)
    # evaluate the model
    y_pred = svm_model.predict(X_test)
    acc = accuracy_score(y_test, y_pred)
    print('C={}, accuracy: {:.3f}'.format(c_value, acc))
    # visualize
    plot_class_regions_for_classifier(svm_model, X_test.values, y_test.values, title='C={}'.format(c_value))

Two-dimensional Gaussian (RBF) kernel decision regions (figure in the original post).

Replacing the kernel with 'linear' (figure in the original post):
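Concretely, the only change in the loop above is the kernel argument:

svm_model = SVC(C=c_value, kernel='linear')  # linear kernel instead of RBF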

2. Ensemble Learning

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler


def load_data():
    # load the dataset
    fruits_df = pd.read_table('fruit_data_with_colors.txt')
    print('number of samples:', len(fruits_df))
    # map target labels to fruit names
    fruit_name_dict = dict(zip(fruits_df['fruit_label'], fruits_df['fruit_name']))
    # split the dataset
    X = fruits_df[['mass', 'width', 'height', 'color_score']]
    y = fruits_df['fruit_label']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/4, random_state=0)
    print('total samples: {}, training samples: {}, test samples: {}'.format(
        len(X), len(X_train), len(X_test)))
    return X_train, X_test, y_train, y_test


# feature normalization
def minmax_scaler(X_train, X_test):
    scaler = MinMaxScaler()
    X_train_scaled = scaler.fit_transform(X_train)
    # the scaler has memorized the training min/max, so the test set only
    # needs transform
    X_test_scaled = scaler.transform(X_test)
    for i in range(4):
        print('before scaling, feature {}: max {:.3f}, min {:.3f}'.format(
            i + 1, X_train.iloc[:, i].max(), X_train.iloc[:, i].min()))
        print('after scaling, feature {}: max {:.3f}, min {:.3f}'.format(
            i + 1, X_train_scaled[:, i].max(), X_train_scaled[:, i].min()))
    return X_train_scaled, X_test_scaled


def stack(X_train_scaled, y_train, X_test_scaled, y_test):
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from mlxtend.classifier import StackingClassifier

    clf1 = KNeighborsClassifier(n_neighbors=1)
    clf2 = SVC(kernel='linear')
    clf3 = DecisionTreeClassifier()
    lr = LogisticRegression(C=100)
    sclf = StackingClassifier(classifiers=[clf1, clf2, clf3],
                              meta_classifier=lr)
    clf1.fit(X_train_scaled, y_train)
    clf2.fit(X_train_scaled, y_train)
    clf3.fit(X_train_scaled, y_train)
    sclf.fit(X_train_scaled, y_train)
    print('kNN test accuracy: {:.3f}'.format(clf1.score(X_test_scaled, y_test)))
    print('SVM test accuracy: {:.3f}'.format(clf2.score(X_test_scaled, y_test)))
    print('DT test accuracy: {:.3f}'.format(clf3.score(X_test_scaled, y_test)))
    print('Stacking test accuracy: {:.3f}'.format(sclf.score(X_test_scaled, y_test)))


if __name__ == '__main__':
    X_train, X_test, y_train, y_test = load_data()
    X_train_scaled, X_test_scaled = minmax_scaler(X_train, X_test)
    stack(X_train_scaled, y_train, X_test_scaled, y_test)  # run the stacking comparison

2.1 Boosting

  • Boosting starts from some base learner, learns repeatedly to obtain a sequence of base learners, and then combines them into one strong learner.
  • Boosting is a serial strategy: the base learners depend on each other, and each new learner must be built from the previous ones.
  • Representative algorithms/models:
  • AdaBoost
  • Boosting trees
  • Gradient boosted decision trees (GBDT)

2.1.1 AdaBoost
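The original post illustrates AdaBoost with figures that are not preserved here. As a minimal, hedged usage sketch with scikit-learn (the hyperparameter values are illustrative, not from the original):

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split

X, y = make_circles(n_samples=300, noise=0.15, factor=0.5, random_state=233)
X_train, X_test, y_train, y_test = train_test_split(X, y)
# AdaBoost reweights the training samples every round so that the next
# weak learner (here a depth-1 decision stump) focuses on the points the
# previous learners misclassified.
ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                         n_estimators=100, learning_rate=0.5)
ada.fit(X_train, y_train)
print('AdaBoost accuracy:', ada.score(X_test, y_test))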

2.1.2 GBDT

def gbdt(X_train_scaled, y_train, X_test_scaled, y_test):
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV

    # grid-search the learning rate with 3-fold cross-validation
    parameters = {'learning_rate': [0.001, 0.01, 0.1, 1, 10, 100]}
    clf = GridSearchCV(GradientBoostingClassifier(), parameters, cv=3, scoring='accuracy')
    clf.fit(X_train_scaled, y_train)
    print('best parameters:', clf.best_params_)
    print('best cross-validation score:', clf.best_score_)
    print('test accuracy: {:.3f}'.format(clf.score(X_test_scaled, y_test)))
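Assuming the scaled fruit features produced by load_data/minmax_scaler in the stacking example above, the grid search would be invoked as:

gbdt(X_train_scaled, y_train, X_test_scaled, y_test)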

 

2.2 Bagging

  • Bagging is a parallel strategy: the base learners are independent of each other and can be trained simultaneously.
  • Representative algorithms/models: random forests and extremely randomized trees (both used in the code below).
import warnings
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier, RandomForestClassifier, ExtraTreesClassifier
from sklearn.ensemble import AdaBoostClassifier

warnings.filterwarnings('ignore')

X, y = make_circles(n_samples=300, noise=0.15, factor=0.5, random_state=233)
plt.scatter(X[y == 0, 0], X[y == 0, 1])
plt.scatter(X[y == 1, 0], X[y == 1, 1])
# plt.show()

X_train, X_test, y_train, y_test = train_test_split(X, y)
print('X_train.shape=', X_train.shape)
print('X_test.shape=', X_test.shape)
print(y_test)

print('===========knn==============')
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_train)
print('knn accuracy={}'.format(knn_clf.score(X_test, y_test)))
print('\n')

print('===========logistic regression==============')
log_clf = LogisticRegression()
log_clf.fit(X_train, y_train)
print('logistic regression accuracy={}'.format(log_clf.score(X_test, y_test)))
print('\n')

print('===========SVM==============')
svm_clf = SVC()
svm_clf.fit(X_train, y_train)
print('SVM accuracy={}'.format(svm_clf.score(X_test, y_test)))
print('\n')

print('===========Decision tree==============')
dt_clf = DecisionTreeClassifier()
dt_clf.fit(X_train, y_train)
print('Decision tree accuracy={}'.format(dt_clf.score(X_test, y_test)))
print('\n')

print('===========ensemble classifier==============')
voting_clf = VotingClassifier(estimators=[('knn', KNeighborsClassifier()),
                                          ('logistic', LogisticRegression()),
                                          ('SVM', SVC()),
                                          ('decision tree', DecisionTreeClassifier())],
                              voting='hard')  # hard voting: strict majority rule
voting_clf.fit(X_train, y_train)
print('voting classifier accuracy={}'.format(voting_clf.score(X_test, y_test)))
print('\n')

print('===========random forest==============')
rf_clf = RandomForestClassifier(n_estimators=500,  # 500 trees
                                max_depth=6,       # depth of each tree
                                bootstrap=True,    # sampling with replacement
                                oob_score=True)    # validate on the out-of-bag data
rf_clf.fit(X, y)  # oob_score=True, so fit the whole dataset directly
print('rf accuracy={}'.format(rf_clf.oob_score_))
print('\n')

print('===========extremely randomized trees==============')
ex_clf = ExtraTreesClassifier(n_estimators=500,
                              max_depth=6,
                              bootstrap=True,
                              oob_score=True)
ex_clf.fit(X, y)
print('extremely randomized trees accuracy={}'.format(ex_clf.oob_score_))
print('\n')

print('===========AdaBoost classifier==============')
ada_clf = AdaBoostClassifier(DecisionTreeClassifier(),
                             n_estimators=500,
                             learning_rate=0.3)
ada_clf.fit(X_train, y_train)
print('AdaBoost accuracy={}'.format(ada_clf.score(X_test, y_test)))
print('\n')

 

    One clever aspect of random forests is the use of randomness to make the model more robust. If the forest contains N trees, N bootstrap training sets are drawn at random, the N trees are trained separately, and the forest's prediction is obtained by aggregating the predictions of the individual trees.

    Because the main building block of a random forest is the decision tree, most of its hyperparameters are shared with decision trees. Beyond those, two are worth noting. One is bootstrap (True/False), which controls whether the training sets are drawn with replacement. The other is oob_score: with replacement sampling, each tree never sees roughly 37% of the samples (the probability that a sample is never drawn is (1 − 1/n)^n ≈ 1/e ≈ 0.368), so when oob_score=True there is no need to split off a separate test set; the model's accuracy is validated directly on the unused, out-of-bag data.

    The results above show that Extremely Randomized Trees reaches the highest accuracy. It injects randomness not only by sampling the training data but also in the feature splits (when growing each tree it uses a subset of the features rather than all of them, and picks split thresholds at random instead of searching for the best split). Put differently, over the feature matrix X, a random forest is random only in the rows, while Extremely Randomized Trees is random in both the rows and the columns.

The relationship between Boosting/Bagging and bias/variance

  • In short, Boosting improves weak classifiers mainly by reducing bias, while Bagging mainly reduces variance.
  • Boosting:
    • Its basic idea is to keep shrinking the model's training error (fitting the residuals, or increasing the weights of misclassified samples), strengthening the model's ability to fit and thereby reducing bias;
    • but Boosting does not significantly reduce variance, because its base learners are trained serially and are strongly correlated, lacking independence.
  • Bagging:
    • Averaging the predictions of n independent, uncorrelated models reduces the variance to 1/n of a single model's (see the short derivation after this list);
    • assuming all base classifiers err independently, the probability that more than half of them err decreases as the number of base classifiers grows.
  • Diagram (in the original post) of the relationship between generalization error, bias, variance, overfitting, underfitting, and model complexity (model capacity).
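A short derivation of the variance claim (a standard result, not in the original post): for n models whose predictions each have variance σ² and pairwise correlation ρ,

Var( (1/n) ∑_{i=1}^{n} f_i(x) ) = ρσ² + ((1 − ρ)/n) σ²

which reduces to σ²/n when the models are independent (ρ = 0). Boosting's strongly correlated rounds gain little from this averaging, which is why Bagging, not Boosting, is the variance-reduction tool.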

 

References:

https://gitee.com/zonghaofan/team-learning/blob/master/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E7%AE%97%E6%B3%95%E5%9F%BA%E7%A1%80/Task2%20bayes_plus.ipynb
