LDA (Latent Dirichlet Allocation) is a generative topic model for documents, often described as a three-level Bayesian probabilistic model with word, topic, and document layers. "Generative" means each word of a document is assumed to arise from the process "choose a topic with some probability, then choose a word from that topic with some probability." Documents follow a multinomial distribution over topics, and topics follow a multinomial distribution over words. [1]
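In standard LDA notation (a brief addition; these symbols are not spelled out in the original post), the generative process reads:

\theta_d \sim \mathrm{Dirichlet}(\alpha), \qquad \varphi_k \sim \mathrm{Dirichlet}(\beta)

z_{d,n} \sim \mathrm{Multinomial}(\theta_d), \qquad w_{d,n} \sim \mathrm{Multinomial}(\varphi_{z_{d,n}})

where \theta_d is the topic distribution of document d, \varphi_k the word distribution of topic k, z_{d,n} the topic chosen for the n-th word of document d, and w_{d,n} the word drawn from that topic.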
LDA is an unsupervised machine-learning technique that can uncover latent topic information in a large document collection or corpus. It takes a bag-of-words approach: each document is treated as a vector of word frequencies, which turns the text into numeric data that is easy to model. Bag of words ignores the order of words, which simplifies the problem but also leaves room for improving the model. Each document is represented as a probability distribution over topics, and each topic as a probability distribution over words.
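To make the bag-of-words step concrete, here is a minimal sketch (with made-up toy documents, not taken from the original post) using Spark's CountVectorizer, which produces exactly the kind of term-frequency vectors LDA consumes:

import org.apache.spark.ml.feature.CountVectorizer
import org.apache.spark.sql.SparkSession

object BowSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local[2]").appName("BowSketch").getOrCreate()
    import spark.implicits._

    // Two toy tokenized documents (assumed data, for illustration only).
    val docs = Seq(
      (0, Seq("spark", "lda", "topic", "topic")),
      (1, Seq("spark", "vector", "word"))
    ).toDF("id", "words")

    // Fit a vocabulary and turn each document into a sparse
    // term-frequency vector; word order within a document is discarded.
    val cvModel = new CountVectorizer()
      .setInputCol("words")
      .setOutputCol("features")
      .fit(docs)

    cvModel.transform(docs).show(false)
    spark.stop()
  }
}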
In effect, LDA clusters words: each topic is displayed through its highest-weight words.
LDA offers two optimizers, "em" and "online"; they differ in the parameters they accept and in the kind of model they produce.
The fitted model exposes two evaluation quantities: logLikelihood (larger is better) and logPerplexity (smaller is better).
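A minimal sketch of the difference (assuming a `dataset` DataFrame with a "features" column, as produced in the full example below): "online" performs online variational Bayes and returns a local model, while "em" runs expectation-maximization and returns a distributed, RDD-backed model:

import org.apache.spark.ml.clustering.LDA

// "online" (the default) returns a LocalLDAModel;
// "em" returns a DistributedLDAModel.
val onlineModel = new LDA().setK(10).setMaxIter(10).setOptimizer("online").fit(dataset)
val emModel = new LDA().setK(10).setMaxIter(10).setOptimizer("em").fit(dataset)

println(onlineModel.isDistributed) // false
println(emModel.isDistributed)     // true

// logLikelihood is a lower bound (larger is better);
// logPerplexity is an upper bound (smaller is better).
println(onlineModel.logLikelihood(dataset))
println(onlineModel.logPerplexity(dataset))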
package spark.mllib

import org.apache.spark.ml.clustering.LDA
import org.apache.spark.ml.feature.PCA
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.{SparkConf, SparkContext}

import scala.collection.mutable

/**
 * Created by liuwei on 2017/7/24.
 */
object LDATest {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("LDATest").setMaster("local[8]")
    val sc = new SparkContext(sparkConf)
    val spark = SparkSession.builder.getOrCreate()

    // Load the sample corpus; each row has a "label" and a
    // term-frequency "features" vector in libsvm format.
    val dataset: DataFrame = spark.read.format("libsvm")
      .load("data/mllib/sample_lda_libsvm_data.txt")

    dataset.show(false)

    // Train an LDA model.
    val lda = new LDA()
      .setK(10)                   // k: number of topics (cluster centers), > 1
      .setMaxIter(10)             // maximum number of iterations, >= 0
      // .setCheckpointInterval(1)  // checkpoint interval (>= 1), or -1 to disable checkpointing
      .setDocConcentration(0.1)   // Dirichlet prior on document-topic distributions; must be > 1.0 for the "em" optimizer
      .setTopicConcentration(0.1) // Dirichlet prior on topic-word distributions; must be > 1.0 for the "em" optimizer
      .setOptimizer("online")     // optimizer: "online" (default) or "em"
    val model = lda.fit(dataset.select("features"))

    val ll = model.logLikelihood(dataset)
    val lp = model.logPerplexity(dataset)
    println(s"The lower bound on the log likelihood of the entire corpus: $ll")
    println(s"The upper bound on perplexity: $lp")

    // Vocabulary mapping term index -> word, loaded from the author's own
    // segmentation output (one "index","word" pair per line). A small
    // hard-coded map such as
    //   mutable.HashMap(1 -> "b", 2 -> "c", 3 -> "d", 6 -> "a", 9 -> "e", 10 -> "f")
    // can stand in for the file when testing.
    val hm2 = new mutable.HashMap[Int, String]
    sc.textFile("data/mllib/C0_segfeatures.txt")
      .map(_.split(","))
      .collect()
      .foreach(x => hm2.put(x(0).replaceAll("\"", "").toInt, x(1).replaceAll("\"", "")))

    // Map each term index of a topic back to its word, falling back to
    // the raw index when the vocabulary has no entry for it.
    val resultUDF = udf((termIndices: mutable.WrappedArray[Int]) =>
      termIndices.map(index => hm2.getOrElse(index, index.toString)))

    // Describe each topic by its 10 top-weighted terms. Uncomment the
    // withColumn call to replace term indices with words from hm2.
    val topics = model.describeTopics(10)
      // .withColumn("termIndices", resultUDF(col("termIndices")))

    println(topics.schema)
    println("The topics described by their top-weighted terms:")
    topics.show(false)

    // Pick the most probable topic for each document (argmax over its
    // topic distribution) as a hard cluster assignment.
    val argmaxUDF = udf { (vector: Vector) => vector.argmax }

    // Show the result: topicDistribution plus the hard prediction.
    var transformed = model.transform(dataset)
    transformed = transformed.withColumn("prediction", argmaxUDF(col("topicDistribution")))
    println(transformed.schema)
    transformed.show(false)

    println(" transform start. ")

    // Project the term-frequency features onto 5 principal components.
    // (The original snippet was truncated here; this reconstruction uses
    // the standard spark.ml PCA API on the same dataset.)
    val pca = new PCA()
      .setInputCol("features")
      .setOutputCol("pcaFeatures")
      .setK(5)
      .fit(dataset)

    val result = pca.transform(dataset).select("pcaFeatures")
    result.show(false)
  }
}
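A note on the final steps: model.transform only appends a soft topicDistribution column, so the argmax UDF above is what turns the topic mixture into a hard per-document cluster assignment, letting LDA double as a clustering algorithm; the PCA step is independent of LDA and simply reduces the same term-frequency vectors to 5 dimensions.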