Besides the grid search and random search we usually run, I found that Bayesian optimization works quite well. I gave it a try, the results were good, and I am sharing my code here.
Bayesian optimization finds the value that minimizes an objective function by building a surrogate function (a probability model) from the past evaluation results of the objective. The Bayesian approach differs from random or grid search in that it consults previous evaluations when choosing the next set of hyperparameters to try, which avoids a great deal of wasted effort.
Evaluating a set of hyperparameters is expensive: it requires training the model with those hyperparameters and then evaluating it, and many deep learning models take hours or even days to train. Bayesian tuning instead uses a continuously updated probability model that reasons from past results to "concentrate" on promising hyperparameters.
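To make the surrogate-model loop concrete, here is a minimal sketch (not from the original post) using scikit-optimize's gp_minimize on a hypothetical one-dimensional toy objective; the objective function and its bounds are made up purely for illustration:

from skopt import gp_minimize

def toy_objective(params):
    # Hypothetical expensive objective; in real tuning this would
    # train and score a model. The minimum is at x = 2.
    x = params[0]
    return (x - 2.0) ** 2

res = gp_minimize(
    toy_objective,
    [(-5.0, 5.0)],   # search bounds for x
    n_calls=20,      # number of objective evaluations
    random_state=42,
)
print(res.x, res.fun)  # best x found and its objective value

After a few random warm-up trials, each call picks the next x by maximizing an acquisition function over the Gaussian-process surrogate, which is exactly the "use past results to choose the next trial" behavior described above.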
from skopt import BayesSearchCV
import xgboost as xgb
from sklearn.model_selection import train_test_split
import pandas as pd
from sklearn.model_selection import StratifiedKFold
import numpy as np
from sklearn.utils import shuffle
# Load and shuffle the training data
train_path = 'ads_train.csv'
train_data = pd.read_csv(train_path)
train_data = shuffle(train_data)

# Feature columns and the binary purchase label
X = train_data[['isbuyer', 'buy_freq', 'visit_freq', 'buy_interval',
                'sv_interval', 'expected_time_buy', 'expected_time_visit',
                'last_buy', 'multiple_buy', 'multiple_visit', 'uniq_urls',
                'num_checkins']]
Y = train_data[['y_buy']]

# Hold out 20% of the data for a final evaluation
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
ITERATIONS = 100

# Classifier: Bayesian search over XGBoost hyperparameters
bayes_cv_tuner = BayesSearchCV(
    estimator=xgb.XGBClassifier(
        n_jobs=1,
        objective='binary:logistic',
        eval_metric='auc',
        silent=1,
        tree_method='approx'
    ),
    search_spaces={
        'learning_rate': (0.01, 1.0, 'log-uniform'),
        'min_child_weight': (0, 5),
        'max_depth': (0, 50),
        'max_delta_step': (0, 20),
        'subsample': (0.01, 1.0, 'uniform'),
        'colsample_bytree': (0.01, 1.0, 'uniform'),
        'colsample_bylevel': (0.01, 1.0, 'uniform'),
        'reg_lambda': (1e-9, 1000, 'log-uniform'),
        'reg_alpha': (1e-9, 1.0, 'log-uniform'),
        'gamma': (1e-9, 0.5, 'log-uniform'),
        'n_estimators': (50, 100),
        'scale_pos_weight': (1e-6, 500, 'log-uniform')
    },
    scoring='roc_auc',
    cv=StratifiedKFold(
        n_splits=5,
        shuffle=True,
        random_state=42
    ),
    n_jobs=6,
    n_iter=ITERATIONS,
    verbose=0,
    refit=True,
    random_state=42
)

def status_print(optim_result):
    """Status callback during Bayesian hyperparameter search."""
    # Get all the models tested so far in DataFrame format
    all_models = pd.DataFrame(bayes_cv_tuner.cv_results_)

    # Get the best parameters found so far
    best_params = pd.Series(bayes_cv_tuner.best_params_)
    print('Model #{}\nBest ROC-AUC: {}\nBest params: {}\n'.format(
        len(all_models),
        np.round(bayes_cv_tuner.best_score_, 4),
        bayes_cv_tuner.best_params_
    ))
    print(dict(bayes_cv_tuner.best_params_))

    # Save all model results so far
    clf_name = bayes_cv_tuner.estimator.__class__.__name__
    all_models.to_csv(clf_name + "_cv_results.csv")

# Fit on the training split so X_test/y_test remain a clean hold-out set;
# ravel() gives the 1-D label array sklearn expects
result = bayes_cv_tuner.fit(X_train.values, y_train.values.ravel(), callback=status_print)
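Because refit=True, the best configuration found is refit at the end of the search, so result.best_estimator_ can be scored directly. A short follow-up sketch (not in the original post) checking the tuned model on the 20% hold-out split created earlier:

from sklearn.metrics import roc_auc_score

best_model = result.best_estimator_  # already refit by BayesSearchCV
test_scores = best_model.predict_proba(X_test.values)[:, 1]
print('Hold-out ROC-AUC:', roc_auc_score(y_test.values.ravel(), test_scores))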