
Two Methods for Model Selection and Hyperparameter Tuning, with Spark MLlib Examples (Scala/Java/Python)


ML Tuning: Model Selection and Hyperparameter Tuning

Model Selection (a.k.a. Hyperparameter Tuning)

An important task in machine learning is model selection: using data to find the best model and parameters for a given problem. This is also called tuning. Tuning may be done for an individual estimator such as logistic regression, or for an entire pipeline that includes multiple algorithms, feature engineering, and other steps. Users can tune a whole pipeline at once, rather than tuning each element of the pipeline separately.

MLlib supports model selection with two tools: CrossValidator and TrainValidationSplit. Both require the following items:

1. Estimator: the algorithm or pipeline to tune.

2. A set of ParamMaps: the parameters to choose from, sometimes called the "parameter grid" to search over.

3. Evaluator: a metric or method for measuring how well a fitted model performs on held-out data.

At a high level, these model selection tools work as follows:

1. They split the input data into separate training and test datasets.

2. For each (training, test) pair, they iterate over the set of ParamMaps: for each ParamMap they fit the estimator with those parameters, obtain the fitted model, and evaluate the model's performance using the evaluator.

3. They select the ParamMap whose model performed best.

For regression problems the evaluator can be a RegressionEvaluator, for binary data a BinaryClassificationEvaluator, and for multiclass problems a MulticlassClassificationEvaluator. The default metric used by each evaluator can be overridden with its setMetricName method.
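As a minimal sketch of overriding the default metric (Scala; areaUnderPR and mae are metric names these evaluators document, shown here as an illustration rather than something used in the examples below):

import org.apache.spark.ml.evaluation.{BinaryClassificationEvaluator, RegressionEvaluator}

// BinaryClassificationEvaluator defaults to areaUnderROC;
// switch it to the area under the precision-recall curve.
val binaryEvaluator = new BinaryClassificationEvaluator()
  .setMetricName("areaUnderPR")

// RegressionEvaluator defaults to rmse; use mean absolute error instead.
val regressionEvaluator = new RegressionEvaluator()
  .setMetricName("mae")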

Users can construct the parameter grid with ParamGridBuilder.

Cross-Validation

CrossValidator begins by splitting the dataset into a set of folds, which are used as separate training and test datasets. For example, with k = 3 folds, CrossValidator generates 3 (training, test) dataset pairs, each of which uses 2/3 of the data for training and 1/3 for testing. To evaluate a particular ParamMap, CrossValidator computes the average of the evaluation metric over the 3 models produced by fitting the estimator on the 3 different (training, test) pairs. After identifying the best ParamMap, CrossValidator finally re-fits the estimator using the best ParamMap and the entire dataset.

Example:

Note that cross-validation over a grid of parameters is expensive. In the example below, the parameter grid has 3 values for hashingTF.numFeatures and 2 values for lr.regParam, and CrossValidator uses 2 folds. This multiplies out to (3 × 2) × 2 = 12 different models being trained. In realistic settings it is common to try many more parameters and to use more folds (3-fold and 10-fold cross-validation are common), so cross-validation can be very expensive. Nevertheless, it remains a well-established method for choosing parameters, and is more statistically sound than heuristic hand-tuning.

Scala:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}
import org.apache.spark.sql.Row

// Prepare training data from a list of (id, text, label) tuples.
val training = spark.createDataFrame(Seq(
  (0L, "a b c d e spark", 1.0),
  (1L, "b d", 0.0),
  (2L, "spark f g h", 1.0),
  (3L, "hadoop mapreduce", 0.0),
  (4L, "b spark who", 1.0),
  (5L, "g d a y", 0.0),
  (6L, "spark fly", 1.0),
  (7L, "was mapreduce", 0.0),
  (8L, "e spark program", 1.0),
  (9L, "a e c l", 0.0),
  (10L, "spark compile", 1.0),
  (11L, "hadoop software", 0.0)
)).toDF("id", "text", "label")

// Configure an ML pipeline, which consists of three stages: tokenizer, hashingTF, and lr.
val tokenizer = new Tokenizer()
  .setInputCol("text")
  .setOutputCol("words")
val hashingTF = new HashingTF()
  .setInputCol(tokenizer.getOutputCol)
  .setOutputCol("features")
val lr = new LogisticRegression()
  .setMaxIter(10)
val pipeline = new Pipeline()
  .setStages(Array(tokenizer, hashingTF, lr))

// We use a ParamGridBuilder to construct a grid of parameters to search over.
// With 3 values for hashingTF.numFeatures and 2 values for lr.regParam,
// this grid will have 3 x 2 = 6 parameter settings for CrossValidator to choose from.
val paramGrid = new ParamGridBuilder()
  .addGrid(hashingTF.numFeatures, Array(10, 100, 1000))
  .addGrid(lr.regParam, Array(0.1, 0.01))
  .build()

// We now treat the Pipeline as an Estimator, wrapping it in a CrossValidator instance.
// This will allow us to jointly choose parameters for all Pipeline stages.
// A CrossValidator requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
// Note that the evaluator here is a BinaryClassificationEvaluator and its default metric
// is areaUnderROC.
val cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(new BinaryClassificationEvaluator)
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(2)  // Use 3+ in practice

// Run cross-validation, and choose the best set of parameters.
val cvModel = cv.fit(training)

// Prepare test documents, which are unlabeled (id, text) tuples.
val test = spark.createDataFrame(Seq(
  (4L, "spark i j k"),
  (5L, "l m n"),
  (6L, "mapreduce spark"),
  (7L, "apache hadoop")
)).toDF("id", "text")

// Make predictions on test documents. cvModel uses the best model found (lrModel).
cvModel.transform(test)
  .select("id", "text", "probability", "prediction")
  .collect()
  .foreach { case Row(id: Long, text: String, prob: Vector, prediction: Double) =>
    println(s"($id, $text) --> prob=$prob, prediction=$prediction")
  }
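To see what cross-validation actually chose, the fitted CrossValidatorModel exposes the winning model and the average metric for each grid point. A short sketch continuing from cvModel above (an illustrative addition, not part of the original example):

// The best model is a PipelineModel whose stages carry the winning parameters.
import org.apache.spark.ml.PipelineModel
val bestPipeline = cvModel.bestModel.asInstanceOf[PipelineModel]

// avgMetrics(i) is the evaluation metric for paramGrid(i), averaged over the folds.
cvModel.avgMetrics.zip(cvModel.getEstimatorParamMaps).foreach {
  case (metric, params) => println(s"$params -> avg metric = $metric")
}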
Java:

import java.util.Arrays;

import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator;
import org.apache.spark.ml.feature.HashingTF;
import org.apache.spark.ml.feature.Tokenizer;
import org.apache.spark.ml.param.ParamMap;
import org.apache.spark.ml.tuning.CrossValidator;
import org.apache.spark.ml.tuning.CrossValidatorModel;
import org.apache.spark.ml.tuning.ParamGridBuilder;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Prepare training documents, which are labeled.
// JavaLabeledDocument and JavaDocument are simple Java beans for (id, text, label)
// and (id, text), defined alongside this example in Spark's example sources.
Dataset<Row> training = spark.createDataFrame(Arrays.asList(
  new JavaLabeledDocument(0L, "a b c d e spark", 1.0),
  new JavaLabeledDocument(1L, "b d", 0.0),
  new JavaLabeledDocument(2L, "spark f g h", 1.0),
  new JavaLabeledDocument(3L, "hadoop mapreduce", 0.0),
  new JavaLabeledDocument(4L, "b spark who", 1.0),
  new JavaLabeledDocument(5L, "g d a y", 0.0),
  new JavaLabeledDocument(6L, "spark fly", 1.0),
  new JavaLabeledDocument(7L, "was mapreduce", 0.0),
  new JavaLabeledDocument(8L, "e spark program", 1.0),
  new JavaLabeledDocument(9L, "a e c l", 0.0),
  new JavaLabeledDocument(10L, "spark compile", 1.0),
  new JavaLabeledDocument(11L, "hadoop software", 0.0)
), JavaLabeledDocument.class);

// Configure an ML pipeline, which consists of three stages: tokenizer, hashingTF, and lr.
Tokenizer tokenizer = new Tokenizer()
  .setInputCol("text")
  .setOutputCol("words");
HashingTF hashingTF = new HashingTF()
  .setNumFeatures(1000)
  .setInputCol(tokenizer.getOutputCol())
  .setOutputCol("features");
LogisticRegression lr = new LogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.01);
Pipeline pipeline = new Pipeline()
  .setStages(new PipelineStage[] {tokenizer, hashingTF, lr});

// We use a ParamGridBuilder to construct a grid of parameters to search over.
// With 3 values for hashingTF.numFeatures and 2 values for lr.regParam,
// this grid will have 3 x 2 = 6 parameter settings for CrossValidator to choose from.
ParamMap[] paramGrid = new ParamGridBuilder()
  .addGrid(hashingTF.numFeatures(), new int[] {10, 100, 1000})
  .addGrid(lr.regParam(), new double[] {0.1, 0.01})
  .build();

// We now treat the Pipeline as an Estimator, wrapping it in a CrossValidator instance.
// This will allow us to jointly choose parameters for all Pipeline stages.
// A CrossValidator requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
// Note that the evaluator here is a BinaryClassificationEvaluator and its default metric
// is areaUnderROC.
CrossValidator cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(new BinaryClassificationEvaluator())
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(2);  // Use 3+ in practice

// Run cross-validation, and choose the best set of parameters.
CrossValidatorModel cvModel = cv.fit(training);

// Prepare test documents, which are unlabeled.
Dataset<Row> test = spark.createDataFrame(Arrays.asList(
  new JavaDocument(4L, "spark i j k"),
  new JavaDocument(5L, "l m n"),
  new JavaDocument(6L, "mapreduce spark"),
  new JavaDocument(7L, "apache hadoop")
), JavaDocument.class);

// Make predictions on test documents. cvModel uses the best model found (lrModel).
Dataset<Row> predictions = cvModel.transform(test);
for (Row r : predictions.select("id", "text", "probability", "prediction").collectAsList()) {
  System.out.println("(" + r.get(0) + ", " + r.get(1) + ") --> prob=" + r.get(2)
    + ", prediction=" + r.get(3));
}

Python:

from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import HashingTF, Tokenizer
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# Prepare training documents, which are labeled.
training = spark.createDataFrame([
    (0, "a b c d e spark", 1.0),
    (1, "b d", 0.0),
    (2, "spark f g h", 1.0),
    (3, "hadoop mapreduce", 0.0),
    (4, "b spark who", 1.0),
    (5, "g d a y", 0.0),
    (6, "spark fly", 1.0),
    (7, "was mapreduce", 0.0),
    (8, "e spark program", 1.0),
    (9, "a e c l", 0.0),
    (10, "spark compile", 1.0),
    (11, "hadoop software", 0.0)
], ["id", "text", "label"])

# Configure an ML pipeline, which consists of three stages: tokenizer, hashingTF, and lr.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
lr = LogisticRegression(maxIter=10)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])

# We now treat the Pipeline as an Estimator, wrapping it in a CrossValidator instance.
# This will allow us to jointly choose parameters for all Pipeline stages.
# A CrossValidator requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
# We use a ParamGridBuilder to construct a grid of parameters to search over.
# With 3 values for hashingTF.numFeatures and 2 values for lr.regParam,
# this grid will have 3 x 2 = 6 parameter settings for CrossValidator to choose from.
paramGrid = ParamGridBuilder() \
    .addGrid(hashingTF.numFeatures, [10, 100, 1000]) \
    .addGrid(lr.regParam, [0.1, 0.01]) \
    .build()

crossval = CrossValidator(estimator=pipeline,
                          estimatorParamMaps=paramGrid,
                          evaluator=BinaryClassificationEvaluator(),
                          numFolds=2)  # use 3+ folds in practice

# Run cross-validation, and choose the best set of parameters.
cvModel = crossval.fit(training)

# Prepare test documents, which are unlabeled.
test = spark.createDataFrame([
    (4, "spark i j k"),
    (5, "l m n"),
    (6, "mapreduce spark"),
    (7, "apache hadoop")
], ["id", "text"])

# Make predictions on test documents. cvModel uses the best model found (lrModel).
prediction = cvModel.transform(test)
selected = prediction.select("id", "text", "probability", "prediction")
for row in selected.collect():
    print(row)

Train-Validation Split

In addition to CrossValidator, Spark also offers TrainValidationSplit for hyperparameter tuning. Unlike CrossValidator, which evaluates each parameter combination k times, TrainValidationSplit evaluates each combination only once. It is therefore less expensive, but it will not produce results as reliable when the training dataset is not sufficiently large.

Unlike CrossValidator, TrainValidationSplit creates a single (training, validation) dataset pair, splitting the original data according to the trainRatio parameter. For example, with trainRatio = 0.75, TrainValidationSplit uses 75% of the data for training and 25% for validation.

Like CrossValidator, once it has identified the best ParamMap, TrainValidationSplit finally re-fits the estimator using the best ParamMap and the entire dataset.

Example:

Scala:

import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}

// Prepare training and test data.
val data = spark.read.format("libsvm").load("data/mllib/sample_linear_regression_data.txt")
val Array(training, test) = data.randomSplit(Array(0.9, 0.1), seed = 12345)

val lr = new LinearRegression()

// We use a ParamGridBuilder to construct a grid of parameters to search over.
// TrainValidationSplit will try all combinations of values and determine best model using
// the evaluator.
val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.1, 0.01))
  .addGrid(lr.fitIntercept)
  .addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0))
  .build()

// In this case the estimator is simply the linear regression.
// A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
val trainValidationSplit = new TrainValidationSplit()
  .setEstimator(lr)
  .setEvaluator(new RegressionEvaluator)
  .setEstimatorParamMaps(paramGrid)
  // 80% of the data will be used for training and the remaining 20% for validation.
  .setTrainRatio(0.8)

// Run train validation split, and choose the best set of parameters.
val model = trainValidationSplit.fit(training)

// Make predictions on test data. model is the model with combination of parameters
// that performed best.
model.transform(test)
  .select("features", "label", "prediction")
  .show()
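The fitted TrainValidationSplitModel can be inspected in the same spirit as CrossValidatorModel; a short sketch continuing from model above (an illustrative addition, not part of the original example):

// validationMetrics(i) is the evaluator's score for paramGrid(i)
// on the single held-out validation set.
model.validationMetrics.zip(model.getEstimatorParamMaps).foreach {
  case (metric, params) => println(s"$params -> validation metric = $metric")
}

// The winning LinearRegressionModel itself:
import org.apache.spark.ml.regression.LinearRegressionModel
val best = model.bestModel.asInstanceOf[LinearRegressionModel]
println(s"intercept=${best.intercept}, coefficients=${best.coefficients}")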
Java:

import org.apache.spark.ml.evaluation.RegressionEvaluator;
import org.apache.spark.ml.param.ParamMap;
import org.apache.spark.ml.regression.LinearRegression;
import org.apache.spark.ml.tuning.ParamGridBuilder;
import org.apache.spark.ml.tuning.TrainValidationSplit;
import org.apache.spark.ml.tuning.TrainValidationSplitModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

Dataset<Row> data = spark.read().format("libsvm")
  .load("data/mllib/sample_linear_regression_data.txt");

// Prepare training and test data.
Dataset<Row>[] splits = data.randomSplit(new double[] {0.9, 0.1}, 12345);
Dataset<Row> training = splits[0];
Dataset<Row> test = splits[1];

LinearRegression lr = new LinearRegression();

// We use a ParamGridBuilder to construct a grid of parameters to search over.
// TrainValidationSplit will try all combinations of values and determine best model using
// the evaluator.
ParamMap[] paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam(), new double[] {0.1, 0.01})
  .addGrid(lr.fitIntercept())
  .addGrid(lr.elasticNetParam(), new double[] {0.0, 0.5, 1.0})
  .build();

// In this case the estimator is simply the linear regression.
// A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
TrainValidationSplit trainValidationSplit = new TrainValidationSplit()
  .setEstimator(lr)
  .setEvaluator(new RegressionEvaluator())
  .setEstimatorParamMaps(paramGrid)
  .setTrainRatio(0.8);  // 80% for training and the remaining 20% for validation

// Run train validation split, and choose the best set of parameters.
TrainValidationSplitModel model = trainValidationSplit.fit(training);

// Make predictions on test data. model is the model with combination of parameters
// that performed best.
model.transform(test)
  .select("features", "label", "prediction")
  .show();
Python:

from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit

# Prepare training and test data.
data = spark.read.format("libsvm") \
    .load("data/mllib/sample_linear_regression_data.txt")
train, test = data.randomSplit([0.7, 0.3])
lr = LinearRegression(maxIter=10, regParam=0.1)

# We use a ParamGridBuilder to construct a grid of parameters to search over.
# TrainValidationSplit will try all combinations of values and determine best model using
# the evaluator.
paramGrid = ParamGridBuilder() \
    .addGrid(lr.regParam, [0.1, 0.01]) \
    .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0]) \
    .build()

# In this case the estimator is simply the linear regression.
# A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
tvs = TrainValidationSplit(estimator=lr,
                           estimatorParamMaps=paramGrid,
                           evaluator=RegressionEvaluator(),
                           # 80% of the data will be used for training, 20% for validation.
                           trainRatio=0.8)

# Run TrainValidationSplit, and choose the best set of parameters.
model = tvs.fit(train)

# Make predictions on test data. model is the model with combination of parameters
# that performed best.
prediction = model.transform(test)
for row in prediction.take(5):
    print(row)

 
