XGBoost (Extreme Gradient Boosting) is a machine-learning algorithm based on gradient-boosted decision trees. It is efficient, flexible, and scalable, and has performed well in many machine-learning competitions. Its core idea is to build an ensemble of decision trees incrementally: each iteration fits a new tree that corrects the errors made by the previous ones. XGBoost uses regularization to guard against overfitting and supports parallel computation to speed up training. It also provides feature-importance evaluation, missing-value handling, cross-validation, and more.
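The "each iteration corrects the previous errors" idea can be sketched in plain NumPy: every round fits a one-split decision stump to the current residuals and adds it, scaled by a learning rate, to the running prediction. This is a toy illustration of gradient boosting for squared error, not XGBoost itself (no regularization, no second-order terms):

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split (decision stump) least-squares fit to the residual."""
    best = None
    for t in np.unique(x)[:-1]:          # candidate thresholds (drop max so both sides are non-empty)
        left, right = residual[x <= t], residual[x > t]
        pred = np.where(x <= t, left.mean(), right.mean())
        err = ((residual - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def boost(x, y, n_rounds=50, lr=0.1):
    """Gradient boosting for squared error: each round models the current errors."""
    pred = np.zeros_like(y, dtype=float)
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)   # fit the residuals of the ensemble so far
        pred += lr * stump(x)            # shrink each stump's contribution
    return pred

x = np.linspace(0.0, 10.0, 100)
y = np.sin(x)
pred = boost(x, y)
print(((y - pred) ** 2).mean())          # training MSE after 50 rounds
```

With shrinkage in (0, 2), each round provably lowers the training squared error, which is why boosting keeps improving on the mistakes of earlier trees.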
First, install the xgboost and DEAP libraries. Since installing with plain pip can be slow, you can pull from a mirror instead by running the commands below; for background, see the post "Python: fixing slow pip installs of third-party packages, plus offline installation".
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple <package-name>
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple xgboost
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple deap
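If you want the mirror for every future install rather than typing `-i` each time, pip (version 10 and later) can be pointed at it permanently; this is a one-line config fragment:

```shell
# use the Tsinghua mirror for all subsequent pip installs
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
```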
An NC file is a data format widely used in meteorology, oceanography, geophysics, and other scientific fields; its full name is NetCDF (Network Common Data Form). It is a self-describing binary format that stores multidimensional arrays together with their metadata, including descriptions of time, space, and variables. NC files can be read and written across platforms and store data efficiently, so they are widely used for scientific computing, data exchange, and sharing. Many tools and libraries support reading, writing, and processing NC data, such as the NetCDF C library, the NetCDF-Java library, and Python's netCDF4 library.
import netCDF4 as nc
import pandas as pd
import numpy as np
# read the nc file
file = r'E:\happy_heaven\fish\pythonProject\Test.nc'
dataset = nc.Dataset(file)
all_vars = dataset.variables.keys()
# print(len(all_vars))
# latitude
# get information for all variables
all_vars_info = dataset.variables.items()
# get the latitude data from the nc file
latitude = dataset.variables['latitude'][:]
from sklearn.model_selection import train_test_split
import xgboost as xgb
from sklearn import metrics
# assemble the data: data holds the feature columns, target holds the class labels
# (latitudeArray, longitudeArray, rain_flag and the tbb_* channels are read from
# the nc variables; the full script below shows how they are prepared)
data = np.hstack((latitudeArray, longitudeArray, tbb_09, tbb_10, tbb_11, tbb_12, tbb_13, tbb_14, tbb_15, tbb_16))
target = rain_flag
print(data.shape)
print(target.shape)
# split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2, random_state=42)
print(X_train.shape)
# create an XGBoost classifier and set its parameters
xgb_clf = xgb.XGBClassifier(
    learning_rate=0.1,
    max_depth=5,
    n_estimators=100,
    objective='multi:softmax',
    num_class=len(set(target)),
    random_state=42
)
# train the model
xgb_clf.fit(X_train, y_train)
# predict on the test set
y_pred = xgb_clf.predict(X_test)
# evaluate model performance
accuracy = metrics.accuracy_score(y_test, y_pred)
precision = metrics.precision_score(y_test, y_pred, average='weighted')
recall = metrics.recall_score(y_test, y_pred, average='weighted')
f1_score = metrics.f1_score(y_test, y_pred, average='weighted')
print("Accuracy:", accuracy)
print("Precision:", precision)
print("Recall:", recall)
print("F1 score:", f1_score)
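The `average='weighted'` choice matters when classes are imbalanced, as rain/no-rain flags usually are: each class's score is weighted by its number of samples, whereas `'macro'` treats all classes equally. A small sketch with made-up labels (8 "no rain" vs 2 "rain") shows the difference:

```python
from sklearn import metrics

# imbalanced toy labels: 8 samples of class 0 ("no rain"), 2 of class 1 ("rain")
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

# macro averages the two per-class F1 scores equally;
# weighted scales each class's F1 by its support (8/10 vs 2/10)
print(metrics.f1_score(y_true, y_pred, average='macro'))     # (0.875 + 0.5) / 2 = 0.6875
print(metrics.f1_score(y_true, y_pred, average='weighted'))  # 0.8 * 0.875 + 0.2 * 0.5 = 0.8
print(metrics.accuracy_score(y_true, y_pred))                # 8 correct out of 10 = 0.8
```

The weighted score sits much closer to the majority class's score, so a model that mostly ignores the rare "rain" class can still look good; checking the macro average (or a per-class report) alongside it is a useful sanity check.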
from sklearn.model_selection import train_test_split
import xgboost as xgb
from sklearn import metrics
import netCDF4 as nc
import pandas as pd
import numpy as np
# read the nc file
file = r'E:\happy_heaven\fish\pythonProject\Test.nc'
dataset = nc.Dataset(file)
all_vars = dataset.variables.keys()
# print(len(all_vars))
# latitude longitude rain_flag tbb_09
# get information for all variables
all_vars_info = dataset.variables.items()
latitude = dataset.variables['latitude'][:]
longitude = dataset.variables['longitude'][:]
latitudeArray = np.zeros((len(latitude), 1))
for i in range(len(latitude)):
    latitudeArray[i, 0] = latitude[i]
longitudeArray = np.zeros((len(longitude), 1))
for i in range(len(longitude)):
    longitudeArray[i, 0] = longitude[i]
rain_flag = dataset.variables['rain_flag'][:, 0]
tbb_09 = dataset.variables['tbb_09'][:, :]
data = np.hstack((latitudeArray,longitudeArray,tbb_09))
target = rain_flag
print(data.shape)
print(target.shape)
# split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2, random_state=42)
print(X_train.shape)
# create an XGBoost classifier and set its parameters
xgb_clf = xgb.XGBClassifier(
    learning_rate=0.1,
    max_depth=5,
    n_estimators=100,
    objective='multi:softmax',
    num_class=len(set(target)),
    random_state=42
)
# train the model
xgb_clf.fit(X_train, y_train)
# predict on the test set
y_pred = xgb_clf.predict(X_test)
# evaluate model performance
accuracy = metrics.accuracy_score(y_test, y_pred)
precision = metrics.precision_score(y_test, y_pred, average='weighted')
recall = metrics.recall_score(y_test, y_pred, average='weighted')
f1_score = metrics.f1_score(y_test, y_pred, average='weighted')
print("Accuracy:", accuracy)
print("Precision:", precision)
print("Recall:", recall)
print("F1 score:", f1_score)
Copyright © 2003-2013 www.wpsshop.cn. All rights reserved.