
Natural Language Processing (NLP) 6: Topic Modeling
import warnings
warnings.filterwarnings('ignore',category=UserWarning)
import nltk.tokenize as tk 
import nltk.corpus as nc 
import nltk.stem.snowball as sb 
import gensim.models.ldamodel as gm 
import gensim.corpora as gc 

doc = []
with open('/Users/youkechaung/Desktop/算法/数据分析/AI/day02/day02/data/topic.txt', 'r') as f:
    for line in f.readlines():
        doc.append(line.rstrip('\n'))
        # strip the trailing newline
tokenizer = tk.RegexpTokenizer(r'\w+')
# Build a regex tokenizer that splits text into words, discarding punctuation
# Words such as "the" and "as" are stopwords: high-frequency but uninformative
stopwords = nc.stopwords.words('english')
stemmer = sb.SnowballStemmer('english')
lines_tokens = []
for line in doc:
    tokens = tokenizer.tokenize(line.lower())
    line_tokens = []
    for token in tokens:
        if token not in stopwords:
            token = stemmer.stem(token)
            line_tokens.append(token)
    lines_tokens.append(line_tokens)
dic = gc.Dictionary(lines_tokens)  # maps each unique token to an integer id
bow = []
for line_tokens in lines_tokens:
    row = dic.doc2bow(line_tokens)  # sparse list of (token_id, count) pairs
    bow.append(row)
print(bow)

n_topics = 2
# Latent Dirichlet Allocation (LDA) topic modeler
model = gm.LdaModel(bow, num_topics=n_topics, id2word=dic, passes=25)
topics = model.print_topics(num_topics=n_topics, num_words=4)
print(topics)
