TF: term frequency
IDF: inverse document frequency
TF = (number of times the term appears in the document) / (total number of terms in the document)
or
TF = (number of times the term appears in the document) / (occurrence count of the most frequent term in the document)
IDF = log(total number of documents in the corpus / (number of documents containing the term + 1))
TF-IDF = TF × IDF
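
Before turning to the libraries, here is a minimal pure-Python sketch of the formulas above, using the first TF variant; the names docs, tf, idf and tf_idf are illustrative, not from any library:

```python
import math

docs = [
    ["tag1", "tag2", "tag3"],
    ["tag2", "tag2", "tag3"],
    ["tag1", "tag4", "tag3"],
]

def tf(term, doc):
    # TF = count of the term / total number of terms in the document
    return doc.count(term) / len(doc)

def idf(term, docs):
    # IDF = log(total documents / (documents containing the term + 1))
    n_containing = sum(1 for d in docs if term in d)
    return math.log(len(docs) / (n_containing + 1))

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

# "tag4" occurs only in the third document, so it gets a positive weight there
print(tf_idf("tag4", docs[2], docs))  # ≈ 0.135
```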
Method 1: computing TF-IDF with gensim
```python
from pprint import pprint
from gensim.corpora import Dictionary
from gensim.models import TfidfModel

data_set = [["tag1", "tag2", "tag3"], ["tag2", "tag2", "tag3"], ["tag1", "tag4", "tag3"]]
dct = Dictionary(data_set)                          # map each tag to an integer id
corpus = [dct.doc2bow(line) for line in data_set]   # bag-of-words (id, count) pairs
pprint(corpus)
model = TfidfModel(corpus)                          # learn IDF statistics from the corpus
print(model[corpus[0]])                             # TF-IDF vector of the first document
```

Output ("tag3" appears in every document, so its IDF, and therefore its weight, is 0 and gensim drops it; the two remaining weights are L2-normalized):

```
[(0, 0.7071067811865476), (1, 0.7071067811865476)]
```
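
The fitted model can also weight a document that was not in the training corpus, as long as it is converted with the same Dictionary. A small usage sketch (the query below is made up for illustration):

```python
query = dct.doc2bow(["tag1", "tag4"])   # hypothetical unseen document
print(model[query])                     # weighted with the IDF learned from corpus
```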
Method 2: TF-IDF with scikit-learn
```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Frequently used TfidfVectorizer parameters:
# stop_words: a custom stop-word list, passed as a Python list
# token_pattern: tokenization regex, e.g. r"(?u)\b\w+\b"
# max_df=0.5: a term appearing in more than 50% of the documents carries very
#             little information, so it is excluded from the vocabulary
tfidf_vec = TfidfVectorizer()
documents = [
    'this is the bayes document',
    'this is the second second document',
    'and the third one',
    'is this the document'
]
# Fit the vectorizer and return the document-term matrix of TF-IDF values
tfidf_matrix = tfidf_vec.fit_transform(documents)
print('TF-IDF value of each term in each document (columns ordered by term id):', '\n', tfidf_matrix.toarray())
# get_feature_names() was removed in scikit-learn 1.2; use get_feature_names_out()
print('Unique terms:', tfidf_vec.get_feature_names_out())
print('Id assigned to each term:', tfidf_vec.vocabulary_)
print('IDF values:', tfidf_vec.idf_)
print('Terms ignored as stop words during fitting:', tfidf_vec.stop_words_)
```
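
The TF-IDF matrix is usually an intermediate step; a common follow-up (not in the original post) is to compare documents by cosine similarity of their TF-IDF rows, assuming the tfidf_matrix fitted above:

```python
from sklearn.metrics.pairwise import cosine_similarity

# 4x4 matrix of pairwise document similarities; the documents sharing
# "this is the ... document" score much higher with each other than with
# "and the third one"
print(cosine_similarity(tfidf_matrix))
```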
