
【AI in Practice】Tuning xgb.XGBRegressor for multi-output regression with MultiOutputRegressor, Part 2 (training on GPU)

This post also resolves the error: `'MultiOutputRegressor' object has no attribute 'best_params_'`.

  • Environment

    • Ubuntu18.04
    • python3.6.9
    • TensorFlow 2.4.2
    • cuda 11.0
    • xgboost 1.5.2
  • Dependencies

    import numpy as np  # needed by performance_metric below
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.model_selection import GridSearchCV  # grid search
    from sklearn.metrics import make_scorer
    from sklearn.metrics import r2_score
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.multioutput import MultiOutputRegressor
    import xgboost as xgb
    import joblib
    
  • Core tuning code

    def tune_parameter(train_data_path, test_data_path, n_input, n_output, version):
        # hyperparameter tuning
        
        x, y = load_data(version, 'train', train_data_path, n_input, n_output)
        train_x, test_x, train_y, test_y = train_test_split(x, y, test_size=0.2, random_state=2022)
        
        gsc = GridSearchCV(
                estimator=xgb.XGBRegressor(seed=42,
                                           tree_method='gpu_hist',
                                           gpu_id=3),
                param_grid={"learning_rate": [0.05, 0.10, 0.15],
                            "n_estimators": [400, 500, 600, 700],
                            "max_depth": [3, 5, 7],
                            "min_child_weight": [1, 3, 5, 7],
                            "gamma": [0.0, 0.1, 0.2],
                            "colsample_bytree": [0.7, 0.8, 0.9],
                            "subsample": [0.7, 0.8, 0.9],
                            },
                cv=3, scoring='neg_mean_squared_error', verbose=0, n_jobs=4)
    
        # one grid search is fitted per output target
        grid_result = MultiOutputRegressor(gsc).fit(train_x, train_y)
    
        # MultiOutputRegressor has no best_params_ of its own;
        # read it from each fitted inner GridSearchCV instead
        print('-' * 20)
        print('best_params:')
        for i in range(len(grid_result.estimators_)):
            print(i, grid_result.estimators_[i].best_params_)
        
        model = grid_result
        
        pre_y = model.predict(test_x)
        print('-' * 20)
        # coefficient of determination (R^2)
        r2 = performance_metric(test_y, pre_y)
        print('test_r2 = ', r2)
    
    def performance_metric(y_true, y_predict):
        
        score = r2_score(y_true, y_predict)
        
        mse = np.mean((y_predict - y_true) ** 2)
        print('RMSE: ', mse ** 0.5)
        mae = np.mean(np.abs(y_predict - y_true))
        print('MAE: ', mae)
        
        return score
    
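The `AttributeError` from the title comes from reading `best_params_` on the wrapper itself: `MultiOutputRegressor` fits one clone of the inner estimator per target and stores them in `estimators_`, so the best parameters live on each fitted inner search, not on the wrapper. A minimal CPU sketch of this, using scikit-learn's `Ridge` in place of `XGBRegressor` purely for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.RandomState(0)
X = rng.rand(60, 4)
# two output targets -> two inner estimators will be fitted
Y = np.column_stack([X.sum(axis=1), X[:, 0] - X[:, 1]])

gsc = GridSearchCV(Ridge(), param_grid={"alpha": [0.1, 1.0]}, cv=3)
wrapped = MultiOutputRegressor(gsc).fit(X, Y)

# the wrapper has no best_params_ -> accessing it raises AttributeError
print(hasattr(wrapped, "best_params_"))  # -> False

# each fitted inner GridSearchCV keeps its own best_params_
for i, est in enumerate(wrapped.estimators_):
    print(i, est.best_params_)
```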

    To save the tuned model, just add the following line:

    joblib.dump(model, './ml_data/xgb_%d_%d_%s.model' %(n_input, n_output, version)) 
    
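Loading the saved model later is the mirror operation with `joblib.load`. A self-contained round-trip sketch, using a small `LinearRegression` stand-in and a temporary path (both purely illustrative):

```python
import os
import tempfile

import joblib
import numpy as np
from sklearn.linear_model import LinearRegression

# fit a tiny stand-in model: y = 2x + 1
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 2 * X.ravel() + 1
model = LinearRegression().fit(X, y)

# save, then load it back from a temporary path
path = os.path.join(tempfile.mkdtemp(), "demo.model")
joblib.dump(model, path)
restored = joblib.load(path)

# the restored model predicts identically to the original
print(restored.predict([[5.0]]))  # -> [11.]
```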

    Tuning: simply replace `param_grid` with your own parameters:

    param_grid = {
        "learning_rate": [0.05, 0.10, 0.15],
        "n_estimators": [400, 500, 600, 700],
        "max_depth": [3, 5, 7],
        "min_child_weight": [1, 3, 5, 7],
        "gamma": [0.0, 0.1, 0.2],
        "colsample_bytree": [0.7, 0.8, 0.9],
        "subsample": [0.7, 0.8, 0.9],
    }
    

    The number of parallel tuning jobs is `n_jobs=4`; adjust it to suit your machine. Setting `n_jobs=-1` tends to cause errors.
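For a sense of why parallel jobs matter here: GridSearchCV enumerates every combination in the grid and refits each one per CV fold, and the wrapper repeats the whole search per output target. A quick count (written without `math.prod` so it also runs on the Python 3.6.9 environment above):

```python
param_grid = {
    "learning_rate": [0.05, 0.10, 0.15],
    "n_estimators": [400, 500, 600, 700],
    "max_depth": [3, 5, 7],
    "min_child_weight": [1, 3, 5, 7],
    "gamma": [0.0, 0.1, 0.2],
    "colsample_bytree": [0.7, 0.8, 0.9],
    "subsample": [0.7, 0.8, 0.9],
}

# number of parameter combinations = product of all list lengths
n_candidates = 1
for values in param_grid.values():
    n_candidates *= len(values)

print(n_candidates)      # 3*4*3*4*3*3*3 = 3888 combinations
print(n_candidates * 3)  # x cv=3 folds = 11664 fits per output target
```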

    GPU parameter settings:

    estimator=xgb.XGBRegressor(seed=42,
                               tree_method='gpu_hist',
                               gpu_id=3)
    

    `tree_method='gpu_hist'` tells XGBoost to train the model on the GPU, and
    `gpu_id=3` selects the GPU with index 3.
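Note that `tree_method='gpu_hist'` and `gpu_id` are the xgboost 1.x API, matching the xgboost 1.5.2 environment above; in xgboost 2.0+ they were deprecated in favor of the `device` parameter. A sketch of both spellings (the 2.x line is commented out since it assumes the newer library):

```python
import xgboost as xgb

# xgboost 1.x (as used in this post)
est = xgb.XGBRegressor(seed=42, tree_method='gpu_hist', gpu_id=3)

# xgboost >= 2.0 equivalent: 'gpu_hist'/'gpu_id' are deprecated
# in favor of tree_method='hist' plus device='cuda:<index>'
# est = xgb.XGBRegressor(seed=42, tree_method='hist', device='cuda:3')
```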
