
XGBoost: Multi-Class Classification with the Dermatology Dataset

The dermatology dataset

The following uses the UCI Dermatology dataset to demonstrate multi-class classification with XGBoost.

First install the XGBoost C++ library and the corresponding Python module, then run the script below. If the training data is not available locally, runexp.sh downloads the dataset from https://archive.ics.uci.edu/ml/datasets/Dermatology and then calls train.py.
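
Before running anything, it can help to confirm that the Python module actually imports. A minimal, optional check (it assumes the package has already been installed, e.g. from source or via pip):

import xgboost as xgb
print(xgb.__version__)   # prints the installed XGBoost version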

1. Run runexp.sh

./runexp.sh

The code of runexp.sh:

#!/bin/bash
if [ -f dermatology.data ]
then
    echo "use existing data to run multi class classification"
else
    echo "getting data from uci, make sure you are connected to internet"
    wget https://archive.ics.uci.edu/ml/machine-learning-databases/dermatology/dermatology.data
fi
python train.py
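
If wget is not available (for example on Windows), the dataset can also be fetched directly from Python. Below is a minimal sketch using only the standard library, with the same URL as in runexp.sh; it is not part of the original demo:

import os
import urllib.request

URL = ('https://archive.ics.uci.edu/ml/machine-learning-databases/'
       'dermatology/dermatology.data')

# download the dataset only if it is not already present locally
if not os.path.exists('dermatology.data'):
    urllib.request.urlretrieve(URL, 'dermatology.data')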

The code of train.py:

#! /usr/bin/python
import numpy as np
import xgboost as xgb

# labels need to be 0 to num_class - 1: column 34 holds the 1-based class label,
# and column 33 (Age) may contain '?', so both get converters; the Age column
# is not used as a feature below
data = np.loadtxt('./dermatology.data', delimiter=',',
                  converters={33: lambda x: int(x == '?'),
                              34: lambda x: int(x) - 1})
sz = data.shape

# 70/30 train/test split
train = data[:int(sz[0] * 0.7), :]
test = data[int(sz[0] * 0.7):, :]
train_X = train[:, 0:33]
train_Y = train[:, 34]
test_X = test[:, 0:33]
test_Y = test[:, 34]

xg_train = xgb.DMatrix(train_X, label=train_Y)
xg_test = xgb.DMatrix(test_X, label=test_Y)

# setup parameters for xgboost
param = {}
# use softmax multi-class classification
param['objective'] = 'multi:softmax'
# learning rate (shrinkage)
param['eta'] = 0.1
param['max_depth'] = 6
param['silent'] = 1
param['nthread'] = 4
param['num_class'] = 6

# evaluate on both the training and the test set during training
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
num_round = 5
bst = xgb.train(param, xg_train, num_round, watchlist)

# get prediction: multi:softmax outputs the predicted class directly
pred = bst.predict(xg_test)
print('predicting, classification error=%f' % (
    sum(int(pred[i]) != test_Y[i] for i in range(len(test_Y))) / float(len(test_Y))))

# do the same thing again, but output probabilities
param['objective'] = 'multi:softprob'
bst = xgb.train(param, xg_train, num_round, watchlist)

# note: this convention has been changed since xgboost-unity;
# the prediction is a 1D array that needs a reshape to (ndata, nclass)
yprob = bst.predict(xg_test).reshape(test_Y.shape[0], 6)
# take the class with the highest probability as the predicted label
ylabel = np.argmax(yprob, axis=1)
print('predicting, classification error=%f' % (
    sum(int(ylabel[i]) != test_Y[i] for i in range(len(test_Y))) / float(len(test_Y))))
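
For reference, the same experiment can also be written against XGBoost's scikit-learn wrapper, which handles the label and probability bookkeeping internally. The sketch below is not part of the original demo; it assumes a reasonably recent xgboost build with the sklearn interface, and the parameter values simply mirror train.py above:

import numpy as np
import xgboost as xgb

# same loading and 70/30 split as in train.py above
data = np.loadtxt('./dermatology.data', delimiter=',',
                  converters={33: lambda x: int(x == '?'),
                              34: lambda x: int(x) - 1})
split = int(data.shape[0] * 0.7)
train, test = data[:split], data[split:]

# the scikit-learn wrapper; n_estimators plays the role of num_round above
clf = xgb.XGBClassifier(n_estimators=5, max_depth=6, learning_rate=0.1,
                        objective='multi:softprob')
clf.fit(train[:, 0:33], train[:, 34].astype(int))

pred = clf.predict(test[:, 0:33])           # predicted class labels
yprob = clf.predict_proba(test[:, 0:33])    # per-class probabilities (one column per class)
print('classification error=%f' % np.mean(pred != test[:, 34].astype(int)))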