Spark MLlib provides three text feature extraction methods: TF-IDF, Word2Vec, and CountVectorizer. Their principles and usage code are summarized below.
TF-IDF
Algorithm overview:
Term frequency-inverse document frequency (TF-IDF) is a feature vectorization method widely used in text mining. It reflects how important a term in a document is with respect to the corpus.
Let a term be denoted by t, a document by d, and the corpus by D. Term frequency TF(t, d) is the number of times term t appears in document d, and document frequency DF(t, D) is the number of documents that contain term t. If we used term frequency alone to measure importance, it would be easy to over-emphasize terms that appear very often in a document but carry little information about it, such as "a", "the", and "of". If a term appears very frequently across the corpus, it carries little information that is specific to any particular document. Inverse document frequency is a numerical measure of how much information a term provides:

IDF(t, D) = log[ (|D| + 1) / (DF(t, D) + 1) ]

where |D| is the total number of documents in the corpus. Because a logarithm is used, the IDF of a term that appears in every document becomes 0. The TF-IDF measure is simply the product of the two: TFIDF(t, d, D) = TF(t, d) · IDF(t, D).
In the code segment below, we start with a set of sentences. First we use a Tokenizer to split each sentence into individual terms. For each sentence (bag of words) we then use HashingTF to hash the sentence into a feature vector, and finally use IDF to rescale the feature vectors; this usually improves performance when using text as features. The resulting feature vectors can then be passed to a learning algorithm.
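Below is a minimal Scala sketch of the Tokenizer, HashingTF, and IDF pipeline just described, assuming an existing SparkSession named spark; the column names ("sentence", "words", "rawFeatures", "features") and the numFeatures value are illustrative choices, not fixed by the text.

import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}

// Sample sentences (illustrative data), one per row.
val sentenceData = spark.createDataFrame(Seq(
  (0.0, "Hi I heard about Spark"),
  (0.0, "I wish Java could use case classes"),
  (1.0, "Logistic regression models are neat")
)).toDF("label", "sentence")

// Split each sentence into words.
val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")
val wordsData = tokenizer.transform(sentenceData)

// Hash each bag of words into a fixed-length term-frequency vector.
val hashingTF = new HashingTF()
  .setInputCol("words")
  .setOutputCol("rawFeatures")
  .setNumFeatures(20)
val featurizedData = hashingTF.transform(wordsData)

// Rescale the raw term frequencies by inverse document frequency.
val idf = new IDF().setInputCol("rawFeatures").setOutputCol("features")
val idfModel = idf.fit(featurizedData)
val rescaledData = idfModel.transform(featurizedData)

rescaledData.select("label", "features").show()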
Word2Vec
Algorithm overview:
Word2Vec is an Estimator that takes sequences of words representing documents and trains a Word2VecModel. The model maps each word to a fixed-size vector, and the Word2VecModel transforms each document into a vector by averaging the vectors of the words it contains; this vector can then be used as a feature for prediction, for computing document similarity, and so on.
In the code segment below, we start with a set of documents, each represented as a sequence of words. Each document is transformed into a feature vector, which can then be passed to a learning algorithm.
Usage:
Scala:
import org.apache.spark.ml.feature.Word2Vec

// Input data: Each row is a bag of words from a sentence or document.
val documentDF = spark.createDataFrame(Seq(
  "Hi I heard about Spark".split(" "),
  "I wish Java could use case classes".split(" "),
  "Logistic regression models are neat".split(" ")
).map(Tuple1.apply)).toDF("text")

// Learn a mapping from words to Vectors.
val word2Vec = new Word2Vec()
  .setInputCol("text")
  .setOutputCol("result")
  .setVectorSize(3)
  .setMinCount(0)
val model = word2Vec.fit(documentDF)
val result = model.transform(documentDF)
result.select("result").take(3).foreach(println)
Java:

import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.feature.Word2Vec;
import org.apache.spark.ml.feature.Word2VecModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.*;

// Input data: Each row is a bag of words from a sentence or document.
List<Row> data = Arrays.asList(
  RowFactory.create(Arrays.asList("Hi I heard about Spark".split(" "))),
  RowFactory.create(Arrays.asList("I wish Java could use case classes".split(" "))),
  RowFactory.create(Arrays.asList("Logistic regression models are neat".split(" ")))
);
StructType schema = new StructType(new StructField[]{
  new StructField("text", new ArrayType(DataTypes.StringType, true), false, Metadata.empty())
});
Dataset<Row> documentDF = spark.createDataFrame(data, schema);

// Learn a mapping from words to Vectors.
Word2Vec word2Vec = new Word2Vec()
  .setInputCol("text")
  .setOutputCol("result")
  .setVectorSize(3)
  .setMinCount(0);
Word2VecModel model = word2Vec.fit(documentDF);
Dataset<Row> result = model.transform(documentDF);
for (Row r : result.select("result").takeAsList(3)) {
  System.out.println(r);
}
Python:

from pyspark.ml.feature import Word2Vec

# Input data: Each row is a bag of words from a sentence or document.
documentDF = spark.createDataFrame([
    ("Hi I heard about Spark".split(" "), ),
    ("I wish Java could use case classes".split(" "), ),
    ("Logistic regression models are neat".split(" "), )
], ["text"])

# Learn a mapping from words to Vectors.
word2Vec = Word2Vec(vectorSize=3, minCount=0, inputCol="text", outputCol="result")
model = word2Vec.fit(documentDF)
result = model.transform(documentDF)
for feature in result.select("result").take(3):
    print(feature)
CountVectorizer
Algorithm overview:
CountVectorizer and CountVectorizerModel aim to convert a collection of documents into vectors of token counts. When no a-priori dictionary is available, CountVectorizer can be used as an Estimator to extract the vocabulary and produce a CountVectorizerModel. The model yields a sparse representation of the documents over the vocabulary, which can then be passed to other algorithms such as LDA.
During the fitting process, CountVectorizer selects the top vocabSize terms ordered by term frequency across the corpus. An optional parameter minDF also affects fitting by specifying the minimum number of documents a term must appear in to be included in the vocabulary. Another optional binary toggle parameter controls the output vector: if set to true, all nonzero counts are set to 1, which is useful for discrete probabilistic models that work with binary rather than integer counts (see the short sketch below).
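As a small illustration of that binary toggle, which the examples further below do not use, a CountVectorizer can be switched to 0/1 output in Scala roughly as follows; the column names here are only placeholders:

import org.apache.spark.ml.feature.CountVectorizer

// Emit 1.0 for every term that occurs at least once, instead of its raw count.
val binaryCv = new CountVectorizer()
  .setInputCol("words")
  .setOutputCol("features")
  .setBinary(true)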
Example:
Suppose we have the following DataFrame with columns id and texts:

 id | texts
----|--------------------------------
 0  | Array("a", "b", "c")
 1  | Array("a", "b", "b", "c", "a")

Each entry in texts is a document represented as an array of strings. Fitting a CountVectorizer produces a CountVectorizerModel with vocabulary (a, b, c), and the transformed output vectors are:

 id | texts                          | vector
----|--------------------------------|---------------------------
 0  | Array("a", "b", "c")           | (3,[0,1,2],[1.0,1.0,1.0])
 1  | Array("a", "b", "b", "c", "a") | (3,[0,1,2],[2.0,2.0,1.0])

Each vector records how many times each term in the model's vocabulary occurs in the document; for example, (3,[0,1,2],[2.0,2.0,1.0]) is a sparse vector of length 3 whose values at indices 0, 1, and 2 are 2.0, 2.0, and 1.0.
Usage:
Scala:
import org.apache.spark.ml.feature.{CountVectorizer, CountVectorizerModel}

val df = spark.createDataFrame(Seq(
  (0, Array("a", "b", "c")),
  (1, Array("a", "b", "b", "c", "a"))
)).toDF("id", "words")

// fit a CountVectorizerModel from the corpus
val cvModel: CountVectorizerModel = new CountVectorizer()
  .setInputCol("words")
  .setOutputCol("features")
  .setVocabSize(3)
  .setMinDF(2)
  .fit(df)

// alternatively, define CountVectorizerModel with a-priori vocabulary
val cvm = new CountVectorizerModel(Array("a", "b", "c"))
  .setInputCol("words")
  .setOutputCol("features")

cvModel.transform(df).select("features").show()
Java:

import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.feature.CountVectorizer;
import org.apache.spark.ml.feature.CountVectorizerModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.*;

// Input data: Each row is a bag of words from a sentence or document.
List<Row> data = Arrays.asList(
  RowFactory.create(Arrays.asList("a", "b", "c")),
  RowFactory.create(Arrays.asList("a", "b", "b", "c", "a"))
);
StructType schema = new StructType(new StructField[]{
  new StructField("text", new ArrayType(DataTypes.StringType, true), false, Metadata.empty())
});
Dataset<Row> df = spark.createDataFrame(data, schema);

// fit a CountVectorizerModel from the corpus
CountVectorizerModel cvModel = new CountVectorizer()
  .setInputCol("text")
  .setOutputCol("feature")
  .setVocabSize(3)
  .setMinDF(2)
  .fit(df);

// alternatively, define CountVectorizerModel with a-priori vocabulary
CountVectorizerModel cvm = new CountVectorizerModel(new String[]{"a", "b", "c"})
  .setInputCol("text")
  .setOutputCol("feature");

cvModel.transform(df).show();
Python:

from pyspark.ml.feature import CountVectorizer

# Input data: Each row is a bag of words with an ID.
df = spark.createDataFrame([
    (0, "a b c".split(" ")),
    (1, "a b b c a".split(" "))
], ["id", "words"])

# fit a CountVectorizerModel from the corpus.
cv = CountVectorizer(inputCol="words", outputCol="features", vocabSize=3, minDF=2.0)
model = cv.fit(df)
result = model.transform(df)
result.show()