
Installing LightGBM (provided by Microsoft), with notes on the error

Exception: Please install CMake and all required dependencies first

LightGBM

GitHub

Documentation

The installation guide in the docs describes the CLI version, which is not needed here. For the actual installation, follow this link:

python-package

Chinese documentation

Overview

Installation

Install:

  • Exception: Please install CMake and all required dependencies first

    • Installation page

      • It lists three dependency requirements:

        1. The Build from Sources section gives the requirements for each operating system.
        2. For Windows users, CMake (version 3.8 or higher) is strongly required.
        3. Boost and OpenCL are needed: see the Installation Guide (install OpenCL, libboost, CMake).
    • Library installation steps (a quick dependency sanity check is sketched after this list):

      # The following NEW packages will be installed:
      # nvidia-opencl-dev ocl-icd-opencl-dev
      sudo apt install nvidia-opencl-dev
      sudo apt install ocl-icd-libopencl1 ocl-icd-opencl-dev
      sudo apt install libboost-dev libboost-system-dev libboost-filesystem-dev
      # in conda env 
      conda install cmake
      
  • install (build the GPU-enabled Python package):

    pip install lightgbm --install-option=--gpu
    
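The sanity check mentioned above: the short Python snippet below (a minimal sketch, not from the original post, using only the standard library) confirms that the build tools and shared libraries installed by the apt/conda commands are actually discoverable before pip tries to compile the package. Also note that --install-option comes from older pip and LightGBM releases; if a recent pip rejects the flag, consult the current LightGBM installation guide for the replacement build options.

import ctypes.util
import shutil

# Build tools: cmake (installed via conda above) and a C++ compiler.
for tool in ('cmake', 'g++'):
    print('{}: {}'.format(tool, shutil.which(tool)))                  # a path, or None if missing

# Shared libraries: the OpenCL ICD loader and the Boost components installed via apt.
for lib in ('OpenCL', 'boost_system', 'boost_filesystem'):
    print('lib{}: {}'.format(lib, ctypes.util.find_library(lib)))     # a soname, or None if missing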

GPU test:

https://github.com/microsoft/LightGBM/issues/3939

Test code 1:

import lightgbm
import numpy as np


def check_gpu_support():
    # Train a tiny model for one iteration with device='gpu';
    # if the GPU tree learner is not available, lightgbm.train raises an exception.
    data = np.random.rand(50, 2)
    label = np.random.randint(2, size=50)
    train_data = lightgbm.Dataset(data, label=label)
    params = {'num_iterations': 1, 'device': 'gpu'}
    try:
        gbm = lightgbm.train(params, train_set=train_data)
        print("GPU True !!!")
    except Exception as e:
        print("GPU False !!!")
        print(e)


if __name__ == '__main__':
    check_gpu_support()

Test code 2:

import lightgbm as lgb
import time
import numpy as np

# Toy data: the last column of the original array is treated as the label here,
# because lgb.train requires a Dataset with labels.
X = np.array([[2, 23, 34, 54], [21, 23, 4, 64], [27, 53, 3, 4]], dtype=float)
y = np.array([1, 1, 0])
dtrain = lgb.Dataset(data=X, label=y)

# GPU run (uncomment after a successful GPU build):
# params = {'max_bin': 63,
#           'num_leaves': 255,
#           'learning_rate': 0.1,
#           'tree_learner': 'serial',
#           'task': 'train',
#           'is_training_metric': 'false',
#           'min_data_in_leaf': 1,
#           'min_sum_hessian_in_leaf': 100,
#           'ndcg_eval_at': [1, 3, 5, 10],
#           'sparse_threshold': 1.0,
#           'device': 'gpu',
#           'gpu_platform_id': 0,
#           'gpu_device_id': 0}
#
# t0 = time.time()
# gbm = lgb.train(params, train_set=dtrain, num_boost_round=10)
# t1 = time.time()
#
# print('gpu version elapse time: {}'.format(t1 - t0))

# CPU run:
params = {'max_bin': 63,
          'num_leaves': 255,
          'learning_rate': 0.1,
          'tree_learner': 'serial',
          'task': 'train',
          'is_training_metric': 'false',
          'min_data_in_leaf': 1,
          'min_sum_hessian_in_leaf': 100,
          'ndcg_eval_at': [1, 3, 5, 10],
          'sparse_threshold': 1.0,
          'device': 'cpu'
          }

t0 = time.time()
gbm = lgb.train(params, train_set=dtrain, num_boost_round=10)
t1 = time.time()

print('cpu version elapse time: {}'.format(t1 - t0))
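
With only three rows, the timing comparison above cannot show a meaningful difference between devices; the GPU run would mostly measure setup overhead. Below is a minimal sketch, assuming the GPU build succeeded, of the same comparison on a larger random dataset; the dataset size and parameter values are illustrative, not from the original post.

import time

import lightgbm as lgb
import numpy as np

# Larger synthetic binary-classification data so the device choice actually matters.
X = np.random.rand(100000, 50)
y = np.random.randint(2, size=100000)

for device in ('cpu', 'gpu'):            # 'gpu' needs the OpenCL build installed above
    dtrain = lgb.Dataset(X, label=y)     # fresh Dataset so each run builds its own bins
    params = {'objective': 'binary',
              'max_bin': 63,             # smaller max_bin is recommended for the GPU version
              'num_leaves': 255,
              'learning_rate': 0.1,
              'device': device,
              'verbose': -1}
    t0 = time.time()
    lgb.train(params, train_set=dtrain, num_boost_round=50)
    print('{} version elapse time: {:.2f}s'.format(device, time.time() - t0))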

About the GPU version and the CUDA version

GitHub issue: the GPU documentation is unclear

CUDA version: use device_type="cuda" instead of device_type="gpu"

The most recent reply:

CUDA version is a re-written in CUDA language GPU version for systems where OpenCL is not available.

Summary:

GPU version: the OpenCL-based GPU build. This is what the Python package built with --install-option=--gpu gives you, and it is selected with device_type="gpu".

CUDA version: the GPU code rewritten in CUDA, for systems where OpenCL is not available; it is selected with device_type="cuda".
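
For completeness, a minimal sketch of how the two versions are selected from the Python API. The dataset and parameter values are illustrative, and the script assumes a LightGBM build compiled with CUDA support; otherwise the 'cuda' run will raise.

import lightgbm as lgb
import numpy as np

X = np.random.rand(1000, 20)
y = np.random.randint(2, size=1000)
dtrain = lgb.Dataset(X, label=y)

# OpenCL-based GPU version:
# params = {'objective': 'binary', 'device_type': 'gpu'}

# CUDA version (requires LightGBM compiled with CUDA support):
params = {'objective': 'binary', 'device_type': 'cuda', 'verbose': -1}

booster = lgb.train(params, train_set=dtrain, num_boost_round=10)
print(booster.num_trees())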
