Python-Jupyter (CPU) Machine Learning Code


        Python Jupyter is an open-source interactive notebook tool that provides a convenient environment for data analysis, data visualization, and machine-learning modeling. A Jupyter notebook is served as a web application that runs directly in the browser and supports many programming languages, Python being the most common.

In a Jupyter notebook, users can write and run code, display data visualizations, record their reasoning, and share all of this with others. This flexibility makes Jupyter one of the tools of choice for data scientists and machine-learning engineers.

Here are some common uses of Python Jupyter notebooks (a minimal workflow sketch follows this list):

  1. Data analysis: clean, transform, and analyze data with pandas, numpy, and other data-processing libraries.
  2. Data visualization: create charts and visualizations with libraries such as matplotlib, seaborn, or Plotly.
  3. Machine-learning modeling: build machine-learning and deep-learning models with scikit-learn, TensorFlow, PyTorch, and similar libraries.
  4. Experimental programming: test ideas and algorithms interactively to speed up development iterations.
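
A minimal sketch of that workflow, assuming a hypothetical CSV file sample.csv with a numeric column named value (neither is part of the Titanic example below):

import pandas as pd
import matplotlib.pyplot as plt

# Load a small CSV and print summary statistics (hypothetical file name)
df = pd.read_csv('sample.csv')
print(df.describe())

# Plot the distribution of one numeric column (hypothetical column name)
df['value'].hist()
plt.title('value')
plt.show()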

Advantages of Python Jupyter notebooks include:

  • Interactive programming: run code and inspect results immediately, which makes debugging and experimentation convenient.
  • Data visualization: rich built-in support for visualization helps you understand the data better.
  • Easy sharing: a notebook can be saved as a standalone document and shared with others (see the export example after this list).
  • Multi-language support: Python is the primary language, but others (such as R and Julia) are also supported.
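
For instance, one way to share a notebook as a standalone document is Jupyter's nbconvert tool, run here from a notebook cell; notebook.ipynb is a hypothetical file name:

# Export a notebook to a standalone HTML document (run from a notebook cell)
!jupyter nbconvert --to html notebook.ipynb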

titanic.csv data template

import time
time_start = time.time()

import numpy as np   # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
import os
# Input data files are available in the read-only "../input/" directory
# (in a Kaggle notebook, a listing cell run with Shift+Enter would show all files under that directory)

data = pd.read_csv('/dataset/Titanic4/titanic.csv')

from sklearn.model_selection import train_test_split
training, test = train_test_split(data, test_size=0.3)
training['train_test'] = 1
test['train_test'] = 0
test['Survived'] = np.nan
all_data = pd.concat([training, test])
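# Note: training and test are concatenated here with the train_test flag so that the feature
# engineering and pd.get_dummies steps below are applied consistently to both splits; the two
# sets are separated again further down using this flag.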

# Understand the nature of the data: .info(), .describe()
# Histograms and boxplots
# Value counts
# Missing data
# Correlation between the metrics
# Explore interesting themes:
#   Did the wealthy survive?
#   By location
#   Age scatterplot with ticket price
#   Young and wealthy variable?
#   Total spent?
# Feature engineering:
#   Preprocess data together or use a transformer?
#   Use a label for train and test
#   Scaling?
# Model baseline
# Model comparison with CV

%matplotlib inline
all_data.columns

# Quick look at our data types & null counts
training.info()

# To better understand the numeric data, we use the .describe() method.
# This gives us an understanding of the central tendencies of the data
training.describe()

# Quick way to separate numeric columns
training.describe().columns

# Look at numeric and categorical values separately
df_num = training[['Age','SibSp','Parch','Fare']]
df_cat = training[['Survived','Pclass','Sex','Ticket','Cabin','Embarked']]

# Distributions for all numeric variables
for i in df_num.columns:
    plt.hist(df_num[i])
    plt.title(i)
    plt.show()

print(df_num.corr())
sns.heatmap(df_num.corr())

# Compare survival rate across Age, SibSp, Parch, and Fare
pd.pivot_table(training, index='Survived', values=['Age','SibSp','Parch','Fare'])

# Value counts for each categorical variable (keyword arguments are required by recent seaborn versions)
for i in df_cat.columns:
    sns.barplot(x=df_cat[i].value_counts().index, y=df_cat[i].value_counts()).set_title(i)
    plt.show()

# Comparing survival and each of these categorical variables
print(pd.pivot_table(training, index='Survived', columns='Pclass', values='Ticket', aggfunc='count'))
print()
print(pd.pivot_table(training, index='Survived', columns='Sex', values='Ticket', aggfunc='count'))
print()
print(pd.pivot_table(training, index='Survived', columns='Embarked', values='Ticket', aggfunc='count'))

df_cat.Cabin
training['cabin_multiple'] = training.Cabin.apply(lambda x: 0 if pd.isna(x) else len(x.split(' ')))
# After looking at this, we may want to look at cabin by letter or by number.
# Let's create some categories for this: single letters and multiple letters
training['cabin_multiple'].value_counts()

pd.pivot_table(training, index='Survived', columns='cabin_multiple', values='Ticket', aggfunc='count')

# Create categories based on the cabin letter (n stands for null)
# In this case we will treat null values as their own category
training['cabin_adv'] = training.Cabin.apply(lambda x: str(x)[0])

# Comparing survival rate by cabin
print(training.cabin_adv.value_counts())
pd.pivot_table(training, index='Survived', columns='cabin_adv', values='Name', aggfunc='count')

# Understand ticket values better: numeric vs non-numeric
training['numeric_ticket'] = training.Ticket.apply(lambda x: 1 if x.isnumeric() else 0)
training['ticket_letters'] = training.Ticket.apply(lambda x: ''.join(x.split(' ')[:-1]).replace('.','').replace('/','').lower() if len(x.split(' ')[:-1]) > 0 else 0)
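# The lambda above keeps everything before the ticket's last space (its letter/number prefix),
# strips '.' and '/', and lowercases it; purely numeric tickets with no prefix are marked 0.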
training['numeric_ticket'].value_counts()

# Lets us view all rows in the dataframe through scrolling; this is for convenience
pd.set_option("display.max_rows", None)
training['ticket_letters'].value_counts()

# Difference in numeric vs non-numeric tickets in survival rate
pd.pivot_table(training, index='Survived', columns='numeric_ticket', values='Ticket', aggfunc='count')

# Survival rate across different ticket types
pd.pivot_table(training, index='Survived', columns='ticket_letters', values='Ticket', aggfunc='count')

# Feature engineering on a person's title
training.Name.head(50)
training['name_title'] = training.Name.apply(lambda x: x.split(',')[1].split('.')[0].strip())
# Mr., Ms., Master., etc.
training['name_title'].value_counts()

# Create all the categorical variables built above for both training and test sets
all_data['cabin_multiple'] = all_data.Cabin.apply(lambda x: 0 if pd.isna(x) else len(x.split(' ')))
all_data['cabin_adv'] = all_data.Cabin.apply(lambda x: str(x)[0])
all_data['numeric_ticket'] = all_data.Ticket.apply(lambda x: 1 if x.isnumeric() else 0)
all_data['ticket_letters'] = all_data.Ticket.apply(lambda x: ''.join(x.split(' ')[:-1]).replace('.','').replace('/','').lower() if len(x.split(' ')[:-1]) > 0 else 0)
all_data['name_title'] = all_data.Name.apply(lambda x: x.split(',')[1].split('.')[0].strip())

# Impute nulls for continuous data (the median is used; the mean versions are left commented out)
#all_data.Age = all_data.Age.fillna(training.Age.mean())
all_data.Age = all_data.Age.fillna(training.Age.median())
#all_data.Fare = all_data.Fare.fillna(training.Fare.mean())
all_data.Fare = all_data.Fare.fillna(training.Fare.median())

# Drop null 'Embarked' rows. Only 2 instances of this in training and 0 in test
all_data.dropna(subset=['Embarked'], inplace=True)

# Tried a log norm of SibSp (not used)
all_data['norm_sibsp'] = np.log(all_data.SibSp + 1)
all_data['norm_sibsp'].hist()

# Log norm of Fare (used)
all_data['norm_fare'] = np.log(all_data.Fare + 1)
all_data['norm_fare'].hist()

# Convert Pclass to a string so pd.get_dummies() treats it as a category
all_data.Pclass = all_data.Pclass.astype(str)

# Create dummy variables from categories (OneHotEncoder could also be used)
all_dummies = pd.get_dummies(all_data[['Pclass','Sex','Age','SibSp','Parch','norm_fare','Embarked','cabin_adv','cabin_multiple','numeric_ticket','name_title','train_test']])
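# pd.get_dummies one-hot encodes the object/category columns into 0/1 indicator columns and
# leaves numeric columns unchanged, which is why Pclass was cast to a string above.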

# Split into train and test again
X_train = all_dummies[all_dummies.train_test == 1].drop(['train_test'], axis=1)
X_test = all_dummies[all_dummies.train_test == 0].drop(['train_test'], axis=1)
y_train = all_data[all_data.train_test == 1].Survived
y_train.shape

# Scale data
from sklearn.preprocessing import StandardScaler
scale = StandardScaler()
all_dummies_scaled = all_dummies.copy()
all_dummies_scaled[['Age','SibSp','Parch','norm_fare']] = scale.fit_transform(all_dummies_scaled[['Age','SibSp','Parch','norm_fare']])
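# StandardScaler rescales these continuous columns to zero mean and unit variance;
# the 0/1 dummy columns are left unscaled.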
all_dummies_scaled

X_train_scaled = all_dummies_scaled[all_dummies_scaled.train_test == 1].drop(['train_test'], axis=1)
X_test_scaled = all_dummies_scaled[all_dummies_scaled.train_test == 0].drop(['train_test'], axis=1)
y_train = all_data[all_data.train_test == 1].Survived

from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn import tree
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# I usually use Naive Bayes as a baseline for my classification tasks
gnb = GaussianNB()
cv = cross_val_score(gnb, X_train_scaled, y_train, cv=5)
print(cv)
print(cv.mean())

lr = LogisticRegression(max_iter=2000)
cv = cross_val_score(lr, X_train, y_train, cv=5)
print(cv)
print(cv.mean())

lr = LogisticRegression(max_iter=2000)
cv = cross_val_score(lr, X_train_scaled, y_train, cv=5)
print(cv)
print(cv.mean())

dt = tree.DecisionTreeClassifier(random_state=1)
cv = cross_val_score(dt, X_train, y_train, cv=5)
print(cv)
print(cv.mean())

dt = tree.DecisionTreeClassifier(random_state=1)
cv = cross_val_score(dt, X_train_scaled, y_train, cv=5)
print(cv)
print(cv.mean())

knn = KNeighborsClassifier()
cv = cross_val_score(knn, X_train, y_train, cv=5)
print(cv)
print(cv.mean())

knn = KNeighborsClassifier()
cv = cross_val_score(knn, X_train_scaled, y_train, cv=5)
print(cv)
print(cv.mean())

rf = RandomForestClassifier(random_state=1)
cv = cross_val_score(rf, X_train, y_train, cv=5)
print(cv)
print(cv.mean())

rf = RandomForestClassifier(random_state=1)
cv = cross_val_score(rf, X_train_scaled, y_train, cv=5)
print(cv)
print(cv.mean())

svc = SVC(probability=True)
cv = cross_val_score(svc, X_train_scaled, y_train, cv=5)
print(cv)
print(cv.mean())

from xgboost import XGBClassifier
xgb = XGBClassifier(random_state=1)
cv = cross_val_score(xgb, X_train_scaled, y_train, cv=5)
print(cv)
print(cv.mean())

# A voting classifier combines the predictions of all of its estimators. With "hard" voting, each
# classifier gets one "yes"/"no" vote and the result is a simple majority, so an odd number of
# estimators is generally preferable. With "soft" voting, the predicted probabilities of the models
# are averaged; if the average probability of class 1 is above 50%, the sample is labelled 1.
from sklearn.ensemble import VotingClassifier
voting_clf = VotingClassifier(estimators=[('lr', lr), ('knn', knn), ('rf', rf), ('gnb', gnb), ('svc', svc), ('xgb', xgb)], voting='soft')
cv = cross_val_score(voting_clf, X_train_scaled, y_train, cv=5)
print(cv)
print(cv.mean())

voting_clf.fit(X_train_scaled, y_train)
y_hat_base_vc = voting_clf.predict(X_test_scaled).astype(int)
basic_submission = {'PassengerId': X_test_scaled.index, 'Survived': y_hat_base_vc}
base_submission = pd.DataFrame(data=basic_submission)
base_submission.to_csv('base_submission.csv', index=False)

from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV

# Simple performance reporting function
def clf_performance(classifier, model_name):
    print(model_name)
    print('Best Score: ' + str(classifier.best_score_))
    print('Best Parameters: ' + str(classifier.best_params_))

lr = LogisticRegression()
param_grid = {'max_iter': [2000],
              'penalty': ['l1', 'l2'],
              'C': np.logspace(-4, 4, 20),
              'solver': ['liblinear']}
clf_lr = GridSearchCV(lr, param_grid=param_grid, cv=5, verbose=10, n_jobs=2)
best_clf_lr = clf_lr.fit(X_train_scaled, y_train)
clf_performance(best_clf_lr, 'Logistic Regression')

knn = KNeighborsClassifier()
param_grid = {'n_neighbors': [3, 5, 7, 9],
              'weights': ['uniform', 'distance'],
              'algorithm': ['auto', 'ball_tree', 'kd_tree'],
              'p': [1, 2]}
clf_knn = GridSearchCV(knn, param_grid=param_grid, cv=5, verbose=10, n_jobs=2)
best_clf_knn = clf_knn.fit(X_train_scaled, y_train)
clf_performance(best_clf_knn, 'KNN')

svc = SVC(probability=True)
param_grid = [{'kernel': ['rbf'], 'gamma': [.1, .5, 1, 2, 5, 10], 'C': [.1, 1, 10, 100, 1000]},
              {'kernel': ['linear'], 'C': [.1, 1, 10, 100, 1000]},
              {'kernel': ['poly'], 'degree': [2, 3, 4, 5], 'C': [.1, 1, 10, 100, 1000]}]
clf_svc = GridSearchCV(svc, param_grid=param_grid, cv=5, verbose=10, n_jobs=2)
best_clf_svc = clf_svc.fit(X_train_scaled, y_train)
clf_performance(best_clf_svc, 'SVC')

# Because the total feature space is so large, I used a randomized search to narrow down the
# parameters for the model. I took the best model from this and did a more granular search.
"""
rf = RandomForestClassifier(random_state=1)
param_grid = {'n_estimators': [100, 500, 1000],
              'bootstrap': [True, False],
              'max_depth': [3, 5, 10, 20, 50, 75, 100, None],
              'max_features': ['auto', 'sqrt'],
              'min_samples_leaf': [1, 2, 4, 10],
              'min_samples_split': [2, 5, 10]}
clf_rf_rnd = RandomizedSearchCV(rf, param_distributions=param_grid, n_iter=100, cv=5, verbose=True, n_jobs=2)
best_clf_rf_rnd = clf_rf_rnd.fit(X_train_scaled, y_train)
clf_performance(best_clf_rf_rnd, 'Random Forest')
"""

rf = RandomForestClassifier(random_state=1)
# Note: recent scikit-learn versions (>= 1.3) no longer accept max_features='auto';
# for classifiers 'sqrt' is the equivalent setting
param_grid = {'n_estimators': [400, 450, 500, 550],
              'criterion': ['gini', 'entropy'],
              'bootstrap': [True],
              'max_depth': [15, 20, 25],
              'max_features': ['auto', 'sqrt', 10],
              'min_samples_leaf': [2, 3],
              'min_samples_split': [2, 3]}
clf_rf = GridSearchCV(rf, param_grid=param_grid, cv=5, verbose=10, n_jobs=2)
best_clf_rf = clf_rf.fit(X_train_scaled, y_train)
clf_performance(best_clf_rf, 'Random Forest')

best_rf = best_clf_rf.best_estimator_.fit(X_train_scaled, y_train)
feat_importances = pd.Series(best_rf.feature_importances_, index=X_train_scaled.columns)
feat_importances.nlargest(20).plot(kind='barh')

"""
xgb = XGBClassifier(random_state=1)
param_grid = {
    'n_estimators': [20, 50, 100, 250, 500, 1000],
    'colsample_bytree': [0.2, 0.5, 0.7, 0.8, 1],
    'max_depth': [2, 5, 10, 15, 20, 25, None],
    'reg_alpha': [0, 0.5, 1],
    'reg_lambda': [1, 1.5, 2],
    'subsample': [0.5, 0.6, 0.7, 0.8, 0.9],
    'learning_rate': [.01, 0.1, 0.2, 0.3, 0.5, 0.7, 0.9],
    'gamma': [0, .01, .1, 1, 10, 100],
    'min_child_weight': [0, .01, 0.1, 1, 10, 100],
    'sampling_method': ['uniform', 'gradient_based']
}
#clf_xgb = GridSearchCV(xgb, param_grid=param_grid, cv=5, verbose=True, n_jobs=2)
#best_clf_xgb = clf_xgb.fit(X_train_scaled, y_train)
#clf_performance(best_clf_xgb, 'XGB')
clf_xgb_rnd = RandomizedSearchCV(xgb, param_distributions=param_grid, n_iter=1000, cv=5, verbose=True, n_jobs=2)
best_clf_xgb_rnd = clf_xgb_rnd.fit(X_train_scaled, y_train)
clf_performance(best_clf_xgb_rnd, 'XGB')
"""

xgb = XGBClassifier(random_state=1)
param_grid = {
    'n_estimators': [450, 500, 550],
    'colsample_bytree': [0.75, 0.8, 0.85],
    'max_depth': [None],
    'reg_alpha': [1],
    'reg_lambda': [2, 5, 10],
    'subsample': [0.55, 0.6, .65],
    'learning_rate': [0.5],
    'gamma': [.5, 1, 2],
    'min_child_weight': [0.01],
    'sampling_method': ['uniform']
}
clf_xgb = GridSearchCV(xgb, param_grid=param_grid, cv=5, verbose=10, n_jobs=2)
best_clf_xgb = clf_xgb.fit(X_train_scaled, y_train)
clf_performance(best_clf_xgb, 'XGB')

y_hat_xgb = best_clf_xgb.best_estimator_.predict(X_test_scaled).astype(int)
xgb_submission = {'PassengerId': X_test_scaled.index, 'Survived': y_hat_xgb}
submission_xgb = pd.DataFrame(data=xgb_submission)
submission_xgb.to_csv('xgb_submission3.csv', index=False)

best_lr = best_clf_lr.best_estimator_
best_knn = best_clf_knn.best_estimator_
best_svc = best_clf_svc.best_estimator_
best_rf = best_clf_rf.best_estimator_
best_xgb = best_clf_xgb.best_estimator_

voting_clf_hard = VotingClassifier(estimators=[('knn', best_knn), ('rf', best_rf), ('svc', best_svc)], voting='hard')
voting_clf_soft = VotingClassifier(estimators=[('knn', best_knn), ('rf', best_rf), ('svc', best_svc)], voting='soft')
voting_clf_all = VotingClassifier(estimators=[('knn', best_knn), ('rf', best_rf), ('svc', best_svc), ('lr', best_lr)], voting='soft')
voting_clf_xgb = VotingClassifier(estimators=[('knn', best_knn), ('rf', best_rf), ('svc', best_svc), ('xgb', best_xgb), ('lr', best_lr)], voting='soft')

print('voting_clf_hard :', cross_val_score(voting_clf_hard, X_train, y_train, cv=5))
print('voting_clf_hard mean :', cross_val_score(voting_clf_hard, X_train, y_train, cv=5).mean())
print('voting_clf_soft :', cross_val_score(voting_clf_soft, X_train, y_train, cv=5))
print('voting_clf_soft mean :', cross_val_score(voting_clf_soft, X_train, y_train, cv=5).mean())
print('voting_clf_all :', cross_val_score(voting_clf_all, X_train, y_train, cv=5))
print('voting_clf_all mean :', cross_val_score(voting_clf_all, X_train, y_train, cv=5).mean())
print('voting_clf_xgb :', cross_val_score(voting_clf_xgb, X_train, y_train, cv=5))
print('voting_clf_xgb mean :', cross_val_score(voting_clf_xgb, X_train, y_train, cv=5).mean())

# In a soft voting classifier you can weight some models more than others.
# I used a grid search to explore different weightings (no new results here)
params = {'weights': [[1, 1, 1], [1, 2, 1], [1, 1, 2], [2, 1, 1], [2, 2, 1], [1, 2, 2], [2, 1, 2]]}
vote_weight = GridSearchCV(voting_clf_soft, param_grid=params, cv=5, verbose=10, n_jobs=2)
best_clf_weight = vote_weight.fit(X_train_scaled, y_train)
clf_performance(best_clf_weight, 'VC Weights')
voting_clf_sub = best_clf_weight.best_estimator_.predict(X_test_scaled)

# Make predictions
voting_clf_hard.fit(X_train_scaled, y_train)
voting_clf_soft.fit(X_train_scaled, y_train)
voting_clf_all.fit(X_train_scaled, y_train)
voting_clf_xgb.fit(X_train_scaled, y_train)
best_rf.fit(X_train_scaled, y_train)

y_hat_vc_hard = voting_clf_hard.predict(X_test_scaled).astype(int)
y_hat_rf = best_rf.predict(X_test_scaled).astype(int)
y_hat_vc_soft = voting_clf_soft.predict(X_test_scaled).astype(int)
y_hat_vc_all = voting_clf_all.predict(X_test_scaled).astype(int)
y_hat_vc_xgb = voting_clf_xgb.predict(X_test_scaled).astype(int)

# Convert the outputs to dataframes
final_data = {'PassengerId': X_test_scaled.index, 'Survived': y_hat_rf}
submission = pd.DataFrame(data=final_data)
final_data_2 = {'PassengerId': X_test_scaled.index, 'Survived': y_hat_vc_hard}
submission_2 = pd.DataFrame(data=final_data_2)
final_data_3 = {'PassengerId': X_test_scaled.index, 'Survived': y_hat_vc_soft}
submission_3 = pd.DataFrame(data=final_data_3)
final_data_4 = {'PassengerId': X_test_scaled.index, 'Survived': y_hat_vc_all}
submission_4 = pd.DataFrame(data=final_data_4)
final_data_5 = {'PassengerId': X_test_scaled.index, 'Survived': y_hat_vc_xgb}
submission_5 = pd.DataFrame(data=final_data_5)
final_data_comp = {'PassengerId': X_test_scaled.index,
                   'Survived_vc_hard': y_hat_vc_hard,
                   'Survived_rf': y_hat_rf,
                   'Survived_vc_soft': y_hat_vc_soft,
                   'Survived_vc_all': y_hat_vc_all,
                   'Survived_vc_xgb': y_hat_vc_xgb}
comparison = pd.DataFrame(data=final_data_comp)

# Track differences between outputs
comparison['difference_rf_vc_hard'] = comparison.apply(lambda x: 1 if x.Survived_vc_hard != x.Survived_rf else 0, axis=1)
comparison['difference_soft_hard'] = comparison.apply(lambda x: 1 if x.Survived_vc_hard != x.Survived_vc_soft else 0, axis=1)
comparison['difference_hard_all'] = comparison.apply(lambda x: 1 if x.Survived_vc_all != x.Survived_vc_hard else 0, axis=1)
comparison.difference_hard_all.value_counts()

time_end = time.time()
print('time cost', time_end - time_start, 's')

        Overall, the Python Jupyter notebook is a very powerful tool for data science and machine learning that helps users carry out data analysis and model building more efficiently.
