The 2019 Stata & Python Empirical Econometrics and Web Scraping Summer Workshop starts in just a few days. I have shared LDA topic models on my WeChat public account several times before, but the problems covered were fairly simple. This time, in this notebook, we will work hands-on through the following tasks:
Extracting a topic's keywords
Using grid search to find the best model parameters (a preview sketch follows this list)
Visualizing the topic model
Predicting the topic of newly input text
How to inspect a topic's feature words
How to get the n most important feature words for each topic
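As a preview of the grid-search step, here is a minimal sketch using scikit-learn's GridSearchCV together with LatentDirichletAllocation; the parameter values in the grid are illustrative choices, not the notebook's final settings:

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import GridSearchCV

# candidate values for the number of topics and the online learning decay
search_params = {'n_components': [10, 15, 20, 25, 30],
                 'learning_decay': [0.5, 0.7, 0.9]}

lda = LatentDirichletAllocation()
model = GridSearchCV(lda, param_grid=search_params)
# model.fit(data_vectorized)  # data_vectorized: the document-word matrix built in section 3

GridSearchCV scores each candidate with LDA's built-in score method (an approximate log-likelihood), so no explicit scoring function is needed.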
1. Import the data
Here we use the 20 newsgroups dataset.
import pandas as pd

df = pd.read_json('newsgroups.json')
df.head()
Check which categories appear in target_names:
df.target_names.unique()

array(['rec.autos', 'comp.sys.mac.hardware', 'rec.motorcycles',
       'misc.forsale', 'comp.os.ms-windows.misc', 'alt.atheism',
       'comp.graphics', 'rec.sport.baseball', 'rec.sport.hockey',
       'sci.electronics', 'sci.space', 'talk.politics.misc', 'sci.med',
       'talk.politics.mideast', 'soc.religion.christian',
       'comp.windows.x', 'comp.sys.ibm.pc.hardware',
       'talk.politics.guns', 'talk.religion.misc', 'sci.crypt'],
      dtype=object)
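As an optional extra check (not part of the original walkthrough), pandas' value_counts shows how many posts each category contains:

df.target_names.value_counts()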
2. Clean the English text
Use regular expressions to strip email addresses, line breaks, and other excess whitespace
Use gensim's simple_preprocess to tokenize, yielding a list of words
Note:
nltk and spacy can be fiddly to install and configure; see the article "Installing and configuring the NLP libraries nltk and spacy". The nltk corpora and the spacy English model are both included in the tutorial folder.
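Assuming both libraries themselves are already installed, the resources used below can also be fetched directly; a minimal setup sketch:

import nltk
nltk.download('stopwords')   # the nltk stop-word corpus used below

# the spacy English model is installed from the command line:
#   python -m spacy download en_core_web_sm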
import nltk
import gensim
import re
import spacy  # not imported in the original listing, but required by spacy.load below
from nltk import pos_tag
from nltk.corpus import stopwords

# load the spacy English model; the parser and NER pipeline components
# are disabled since only POS tags and lemmas are needed
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])

def clean_text(text, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
    text = re.sub(r'\S*@\S*\s?', '', text)  # remove email addresses
    text = re.sub(r'\s+', ' ', text)        # collapse consecutive spaces, newlines, and tabs into one space
    # deacc=True transliterates certain non-English letters into ASCII letters, e.g.
    # "Šéf chomutovských komunistů dostal poštou bílý prášek" becomes
    # u'Sef chomutovskych komunistu dostal postou bily prasek'
    words = gensim.utils.simple_preprocess(text, deacc=True)
    # stop-word removal could be inserted here
    stpwords = stopwords.words('english')
    # keep only words tagged 'NOUN', 'ADJ', 'VERB', or 'ADV'
    doc = nlp(' '.join(words))
    text = " ".join([token.lemma_ if token.lemma_ not in ['-PRON-'] else ''
                     for token in doc if token.pos_ in allowed_postags])
    return text

test = "From: lerxst@wam.umd.edu (where's my thing)\nSubject: WHAT car is this!?\nNntp-Posting-Host: rac3.wam.umd.edu\nOrganization: University of Maryland, College Park\nLines: 15\n\n I was wondering if anyone out there could enlighten me on this car I saw\nthe other day. It was a 2-door sports car, looked to be from the late 60s/\nearly 70s. It was called a Bricklin. The doors were really small. In addition,\nthe front bumper was separate from the rest of the body. This is \nall I know. If anyone can tellme a model name, engine specs, years\nof production, where this car is made, history, or whatever info you\nhave on this funky looking car, please e-mail.\n\nThanks,\n- IL\n ---- brought to you by your neighborhood Lerxst ----\n\n\n\n\n"
clean_text(test)
'where thing subject car be nntp post host rac wam umd edu organization university maryland college park line be wonder anyone out there could enlighten car see other day be door sport car look be late early be call bricklin door be really small addition front bumper be separate rest body be know anyone can tellme model name engine spec year production where car be make history info have funky look car mail thank bring neighborhood lerxst'
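Two details of clean_text are worth a quick illustration: deacc=True strips accents during tokenization, and the stpwords list (built but not yet applied above) is what a stop-word filter would draw on. A minimal sketch, reusing the Czech example from the comments:

import gensim
from nltk.corpus import stopwords

# deacc=True transliterates accented letters while tokenizing and lowercasing
tokens = gensim.utils.simple_preprocess(
    "Šéf chomutovských komunistů dostal poštou bílý prášek", deacc=True)
# -> ['sef', 'chomutovskych', 'komunistu', 'dostal', 'postou', 'bily', 'prasek']

# an optional stop-word filter, as hinted at inside clean_text
stpwords = stopwords.words('english')
tokens = [w for w in tokens if w not in stpwords]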
Batch-process the content column with clean_text:
df.content = df.content.apply(clean_text)
df.head()
3. Build the document-word matrix
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer

# vectorizer = TfidfVectorizer(min_df=10)  # a word must appear in at least 10 documents
vectorizer = CountVectorizer(analyzer='word',
                             min_df=10,                        # minimum required occurrences of a word
                             lowercase=True,                   # convert all words to lowercase
                             token_pattern='[a-zA-Z0-9]{3,}')  # tokens must be at least 3 characters long
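LDA models raw word counts, which is why CountVectorizer is used here rather than the commented-out TfidfVectorizer. The matrix itself then comes from fitting the vectorizer to the cleaned text; a minimal sketch (the variable name data_vectorized is my own choice):

# learn the vocabulary and produce the sparse document-word count matrix
data_vectorized = vectorizer.fit_transform(df.content)
data_vectorized.shape   # (number of documents, vocabulary size)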