
pyspark Model Training Demo (Part 1)

Chapter 1: Classification Models



1. Logistic Regression

Step 1: build the raw training data with pandas and createDataFrame:

# spark version 3.0.1
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
import pandas as pd

spark = SparkSession.builder.getOrCreate()

# training data
pandas_df = pd.DataFrame({
    'a': [1, 1, 0, 1, 0],
    'b': [1, 0, 1, 1, 1],
    'c': [0, 1, 0, 0, 0],
    'y': [0, 0, 0, 1, 1],
    'id': ['A001', 'A002', 'A003', 'A004', 'A005']
})

df = spark.createDataFrame(pandas_df).select("id", "a", "b", "c", "y")
df.show()
+----+---+---+---+---+
|  id|  a|  b|  c|  y|
+----+---+---+---+---+
|A001|  1|  1|  0|  0|
|A002|  1|  0|  1|  0|
|A003|  0|  1|  0|  0|
|A004|  1|  1|  0|  1|
|A005|  0|  1|  0|  1|
+----+---+---+---+---+

Step 2: assemble the feature vector and normalize it

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import Normalizer

# assemble columns a/b/c into a single features vector
vecAss = VectorAssembler(inputCols=['a', 'b', 'c'], outputCol='features')
df_features = vecAss.transform(df)
df_features.show()
+----+---+---+---+---+-------------+
|  id|  a|  b|  c|  y|     features|
+----+---+---+---+---+-------------+
|A001|  1|  1|  0|  0|[1.0,1.0,0.0]|
|A002|  1|  0|  1|  0|[1.0,0.0,1.0]|
|A003|  0|  1|  0|  0|[0.0,1.0,0.0]|
|A004|  1|  1|  0|  1|[1.0,1.0,0.0]|
|A005|  0|  1|  0|  1|[0.0,1.0,0.0]|
+----+---+---+---+---+-------------+

Normalize the features (p=1.0 means each row's vector is scaled by its L1 norm):

Norm = Normalizer(inputCol="features", outputCol="normFeatures", p=1.0)
df_norm_features = Norm.transform(df_features)
df_norm_features.show()
+----+---+---+---+---+-------------+-------------+
|  id|  a|  b|  c|  y|     features| normFeatures|
+----+---+---+---+---+-------------+-------------+
|A001|  1|  1|  0|  0|[1.0,1.0,0.0]|[0.5,0.5,0.0]|
|A002|  1|  0|  1|  0|[1.0,0.0,1.0]|[0.5,0.0,0.5]|
|A003|  0|  1|  0|  0|[0.0,1.0,0.0]|[0.0,1.0,0.0]|
|A004|  1|  1|  0|  1|[1.0,1.0,0.0]|[0.5,0.5,0.0]|
|A005|  0|  1|  0|  1|[0.0,1.0,0.0]|[0.0,1.0,0.0]|
+----+---+---+---+---+-------------+-------------+
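With p=1.0, Normalizer divides each row's vector by its L1 norm (the sum of absolute values), which is why [1, 1, 0] becomes [0.5, 0.5, 0.0] above. A minimal plain-Python sketch of the same arithmetic (pyspark is not needed to follow it):

```python
def l1_normalize(vec):
    """Scale a vector by its L1 norm, mirroring Normalizer(p=1.0)."""
    norm = sum(abs(x) for x in vec)
    if norm == 0:
        # assumption for this sketch: leave an all-zero vector unchanged
        return list(vec)
    return [x / norm for x in vec]

print(l1_normalize([1.0, 1.0, 0.0]))  # row A001 -> [0.5, 0.5, 0.0]
print(l1_normalize([0.0, 1.0, 0.0]))  # row A003 -> [0.0, 1.0, 0.0]
```

Note that rows A003 and A005 are unchanged by normalization because their L1 norm is already 1.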

Step 3: train the model

# train a logistic regression on the normalized features
model = LogisticRegression(
    featuresCol='normFeatures', labelCol='y',
    maxIter=100, tol=1e-06, threshold=0.5,
    predictionCol='prediction', probabilityCol='probability',
    rawPredictionCol='rawPrediction', standardization=True
).fit(df_norm_features)

print(model.coefficients)
# [1.8029996152867545,1.803003434834563,-36.96577573215852]
print(model.intercept)
# -1.80300332247
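The fitted line can be sanity-checked by hand: for row A001 the normalized features are [0.5, 0.5, 0.0], so the linear score coef·x + intercept is almost exactly 0, and the sigmoid of that is almost exactly 0.5, which is what the probability column in the next step shows. A plain-Python sketch, with the coefficient values copied from the output above:

```python
import math

# values printed by the fitted model above
coef = [1.8029996152867545, 1.803003434834563, -36.96577573215852]
intercept = -1.80300332247

def predict_proba(x):
    """P(y=1) for one row: sigmoid of the linear score."""
    score = sum(c * xi for c, xi in zip(coef, x)) + intercept
    return 1.0 / (1.0 + math.exp(-score))

print(predict_proba([0.5, 0.5, 0.0]))  # row A001: essentially 0.5
print(predict_proba([0.5, 0.0, 0.5]))  # row A002: near 0, driven by the large negative weight on c
```

The large negative coefficient on feature c reflects that c=1 appears only in rows labeled 0.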

Step 4: run predictions

# score the training data with the fitted model
result = model.transform(df_norm_features)
result.show()

+----+---+---+---+---+-------------+-------------+--------------------+--------------------+----------+
|  id|  a|  b|  c|  y|     features| normFeatures|       rawPrediction|         probability|prediction|
+----+---+---+---+---+-------------+-------------+--------------------+--------------------+----------+
|A001|  1|  1|  0|  0|[1.0,1.0,0.0]|[0.5,0.5,0.0]|[1.79741316608250...|[0.50000044935329...|       0.0|
|A002|  1|  0|  1|  0|[1.0,0.0,1.0]|[0.5,0.0,0.5]|[19.3843913809097...|[0.99999999618525...|       0.0|
|A003|  0|  1|  0|  0|[0.0,1.0,0.0]|[0.0,1.0,0.0]|[-1.1236073826914...|[0.49999997190981...|       1.0|
|A004|  1|  1|  0|  1|[1.0,1.0,0.0]|[0.5,0.5,0.0]|[1.79741316608250...|[0.50000044935329...|       0.0|
|A005|  0|  1|  0|  1|[0.0,1.0,0.0]|[0.0,1.0,0.0]|[-1.1236073826914...|[0.49999997190981...|       1.0|
+----+---+---+---+---+-------------+-------------+--------------------+--------------------+----------+
result.printSchema()
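The prediction column is just the probability column passed through the threshold parameter (0.5 here): the model outputs class 1 when P(y=1) exceeds the threshold. That is why row A003, whose P(y=1) is only a hair above 0.5, flips to prediction 1.0. A plain-Python sketch of that final step:

```python
def decide(probability, threshold=0.5):
    """Map a [P(y=0), P(y=1)] pair to a class label, as threshold=0.5 does."""
    return 1.0 if probability[1] > threshold else 0.0

# row A002: P(y=1) is tiny, so the label is 0.0
print(decide([0.99999999618525, 0.00000000381475]))
# row A003: P(y=1) is just above 0.5, so the label is 1.0
print(decide([0.49999997190981, 0.50000002809019]))
```

Raising the threshold (e.g. to 0.7) would make the model more conservative about predicting class 1 without retraining.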

References:

[1] https://spark.apache.org/docs/3.0.0/api/python/pyspark.ml.html#pyspark.ml.Model
