Decision tree regression (defaults to mean squared error, MSE)
```python
# Train a decision tree regression model (by default, splits minimize MSE)
from sklearn.tree import DecisionTreeRegressor
from sklearn import datasets

boston = datasets.load_boston()
features = boston.data[:, 0:2]
target = boston.target

decisiontree = DecisionTreeRegressor(random_state=0)
model = decisiontree.fit(features, target)

observation = [[0.02, 16]]
model.predict(observation)
# array([33.])
```

Discussion

Decision tree regression works similarly to decision tree classification; however, instead of reducing Gini impurity or entropy, potential splits are by default measured by how much they reduce the mean squared error (MSE):

$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$$

where $y_i$ is the true value of the target and $\hat{y}_i$ is the predicted value. We can use the `criterion` parameter to select the desired measure of split quality. For example, we can construct a tree whose splits reduce the mean absolute error (MAE) instead:

```python
# Mean absolute error (MAE) can also be used as the split criterion
decisiontree_mae = DecisionTreeRegressor(criterion="mae", random_state=0)
model_mae = decisiontree_mae.fit(features, target)
```