Overfitting is a recurring problem when training machine learning models; this post introduces several ways to deal with it.
Overfitting means the model's parameters fit the particular observations in the training set too closely; since the distribution of the training set does not match the true data distribution, the resulting model has high variance, a large generalization error, and poor generalization ability.
Common remedies for overfitting:
- Collect more training data
- Introduce a penalty term via regularization
- Choose a simpler model with fewer parameters
- Reduce the dimensionality of the data
Intuitively, overfitting arises because the weight coefficients along some dimensions become too "extreme". Regularization adds a penalty term to the cost function, increasing the model's bias a little while reducing its variance: it pulls all weight coefficients toward 0, and the more "extreme" a coefficient is, the harder it gets suppressed, which alleviates overfitting to a degree.
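In symbols, a minimal sketch (writing λ for the regularization strength; scikit-learn's C is its inverse, C = 1/λ): the regularized cost adds a norm of the weight vector to the ordinary loss,

J(\mathbf{w}) = \sum_{i=1}^{n} L\big(y^{(i)}, \hat{y}^{(i)}\big) + \lambda \lVert \mathbf{w} \rVert_1 \quad \text{(L1)}

J(\mathbf{w}) = \sum_{i=1}^{n} L\big(y^{(i)}, \hat{y}^{(i)}\big) + \tfrac{\lambda}{2} \lVert \mathbf{w} \rVert_2^2 \quad \text{(L2)}

where \lVert \mathbf{w} \rVert_1 = \sum_j \lvert w_j \rvert and \lVert \mathbf{w} \rVert_2^2 = \sum_j w_j^2; the larger a weight, the larger its penalty, so minimizing J(\mathbf{w}) pulls every w_j toward 0.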
As the figure below showed (the plot itself is not reproduced here): when the value of C (the inverse of the penalty strength) decreases, i.e., when the penalty grows, all the weight coefficients move toward 0.
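Since the original plot is missing, here is a hypothetical sketch of how such a figure can be generated; it uses scikit-learn's built-in copy of the UCI wine data (load_wine) instead of the local file used further below:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_std = StandardScaler().fit_transform(X)

weights, params = [], []
for c in np.arange(-4.0, 6.0):  # sweep C from 10^-4 to 10^5
    lr = LogisticRegression(penalty='l1', C=10.0 ** c, solver='liblinear')
    lr.fit(X_std, y)
    weights.append(lr.coef_[1])  # weights of the one-vs-rest classifier for the second class
    params.append(10.0 ** c)

weights = np.array(weights)
for col in range(weights.shape[1]):
    plt.plot(params, weights[:, col])  # one curve per feature weight
plt.xscale('log')
plt.xlabel('C (inverse of the penalty strength)')
plt.ylabel('weight coefficient')
plt.show()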
Similarities and differences between L1 and L2
Similarity: both are used to avoid overfitting.
Differences:
(1) L1 can shrink the coefficients of some features all the way to 0, which indirectly performs feature selection. L1 therefore suits the case where only a subset of the features is actually relevant.
(2) L2 shrinks the coefficients of all features but never to exactly 0, and it makes the optimization stable and fast. L2 therefore suits the case where all features are expected to contribute; unlike L1, it also spreads weight across correlated features instead of arbitrarily keeping just one of them. A minimal standalone comparison follows.
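The sketch below (again on scikit-learn's built-in wine data; the exact counts will vary with C) simply counts how many weights each penalty drives to exactly zero:

import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_std = StandardScaler().fit_transform(X)

l1 = LogisticRegression(penalty='l1', C=0.1, solver='liblinear').fit(X_std, y)
l2 = LogisticRegression(penalty='l2', C=0.1, solver='liblinear').fit(X_std, y)

print('zero weights under L1:', np.sum(l1.coef_ == 0))  # many exact zeros
print('zero weights under L2:', np.sum(l2.coef_ == 0))  # typically none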
SBS (Sequential Backward Selection) is a greedy feature-selection algorithm that deletes features one at a time, using a criterion function to decide which one to drop at each step; it is used to reduce the dimensionality of the feature space. A minimal sketch of the idea follows.
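The full code below imports SBS from Chapter4.RatingFeature_SBS, which is not shown in this post; the following is a minimal sketch of the algorithm, assuming the same fit/subsets_ interface that the code relies on:

from itertools import combinations
from sklearn.base import clone
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

class SBS:
    """Sequential Backward Selection: greedily drop the feature whose
    removal hurts validation accuracy the least."""
    def __init__(self, estimator, k_features, test_size=0.25, random_state=1):
        self.estimator = clone(estimator)
        self.k_features = k_features
        self.test_size = test_size
        self.random_state = random_state

    def fit(self, X, y):
        X_tr, X_val, y_tr, y_val = train_test_split(
            X, y, test_size=self.test_size, random_state=self.random_state)
        dim = X.shape[1]
        self.indices_ = tuple(range(dim))
        self.subsets_ = [self.indices_]  # subsets_[i] holds dim - i features
        while dim > self.k_features:
            # score every subset that drops exactly one feature, keep the best
            scored = [(self._score(X_tr, y_tr, X_val, y_val, s), s)
                      for s in combinations(self.indices_, dim - 1)]
            _, self.indices_ = max(scored)
            self.subsets_.append(self.indices_)
            dim -= 1
        return self

    def _score(self, X_tr, y_tr, X_val, y_val, indices):
        self.estimator.fit(X_tr[:, indices], y_tr)
        return accuracy_score(y_val, self.estimator.predict(X_val[:, indices]))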
A random forest can give us a ranking of feature importances, from which we can pick out the few most important features.
Evaluating with KNN
You can see that overfitting sets in once 6 or more dimensions are used.
After cutting the number of trees in the forest from 10000 down to 1000, the results actually got slightly better.
Training results with the SVM
Judging from the trend of the curve, the data itself actually shows no overfitting.
# Tackle overfitting with several different methods
# 1. L1 regularization
# 2. SBS
# 3. RandomForest
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split  # split data into training and test sets
from sklearn.preprocessing import MinMaxScaler  # min-max normalization
from sklearn.preprocessing import StandardScaler  # standardization
from sklearn.metrics import accuracy_score  # classification accuracy
from sklearn.svm import SVC  # support vector machine
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from Chapter4.RatingFeature_SBS import SBS  # SBS implementation from Chapter 4 (sketched above)
df_wine = pd.read_csv('./Data/UCI/wine.data', header=None)  # the file has no header row, so read it without one
df_wine.columns = [
    'Class label',  # the first column is the class label; column names must be added manually
'Alcohol',
'Malic acid',
'Ash',
'Alcalinity of ash',
'Magnesium',
'Total phenols',
'Flavanoids',
'Nonflavanoid phenols',
'Proanthocyanins',
'Color intensity',
'Hue',
'OD280/OD315 of diluted wines',
    'Proline'
]
# print('Class labels', np.unique(df_wine['Class label']))
# print(df_wine.head())
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
# X, y = df_wine.values[:, 1:], df_wine.values[:, 0]  # equivalent to the line above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# min-max-normalize the data
mms = MinMaxScaler()
X_train_norm = mms.fit_transform(X_train)
X_test_norm = mms.transform(X_test)
# standardize the data
stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)
def solve_by_SBS(num_of_features):
knn = KNeighborsClassifier(n_neighbors=2)
sbs = SBS(knn, k_features=1)
sbs.fit(X_train_std, y_train)
    # check which n features are most useful; SBS deletes the worst feature one at a time
n = num_of_features
total = df_wine.shape[1]-1
k_n = list(sbs.subsets_[total-n])
return k_n
def solve_by_RandomForest(num_of_features):
forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
forest.fit(X_train, y_train)
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
    # return the indices of the best n features
return indices[:num_of_features]
def train_by_KNN(X_train_std, y_train, X_test_std, y_test):
knn = KNeighborsClassifier(n_neighbors=2, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
y_prediction = knn.predict(X_test_std)
print('Misclassified samples: %d' % (y_test != y_prediction).sum())
print('Accuracy: %f' % accuracy_score(y_test, y_prediction))
return accuracy_score(y_test, y_prediction)
def train_by_SVM(X_train_std, y_train, X_test_std, y_test):
svm = SVC(kernel='linear', C=1.0, random_state=0)
svm.fit(X_train_std, y_train)
y_prediction = svm.predict(X_test_std)
print('Misclassified samples: %d' % (y_test != y_prediction).sum())
print('Accuracy: %f' % accuracy_score(y_test, y_prediction))
# use SBS to search for the best feature subset, starting from 1 feature
accuracies = []
for n in range(1, X_train.shape[1]):
print('Feature Number', n ,': ')
index = solve_by_SBS(num_of_features=n)
best_train_std = X_train_std[:, index]
best_test_std = X_test_std[:, index]
accuracy = train_by_KNN(best_train_std, y_train, best_test_std, y_test)
accuracies.append(accuracy)
plt.plot(range(1, len(accuracies) + 1), accuracies, label='SBS', marker='o', color='blue')  # x axis counts features from 1
# use random forest importances to search for the best feature subset, starting from 1 feature
accuracies = []
for n in range(1, X_train.shape[1]):
print('Feature Number', n, ': ')
index = solve_by_RandomForest(num_of_features=n)
best_train_std = X_train_std[:, index]
best_test_std = X_test_std[:, index]
accuracy = train_by_KNN(best_train_std, y_train, best_test_std, y_test)
accuracies.append(accuracy)
plt.plot(range(1, len(accuracies) + 1), accuracies, label='RandomForest', marker='x', color='red')
plt.xlabel('Number of Selected Features')
plt.ylabel('Accuracy')
plt.xlim([-1, len(accuracies)+1])
plt.ylim([0, 1.2])
plt.legend(loc='upper left')
plt.show()
# lr = LogisticRegression(penalty='l1', C=0.1, solver='liblinear')  # a penalty this strong hardly seems necessary; C is the inverse of the penalty strength (penalty='l1' requires the liblinear solver)
# lr.fit(X_train_std, y_train)
# print('Training accuracy:', lr.score(X_train_std, y_train))
# print('Test accuracy:', lr.score(X_test_std, y_test))