Part 1: Training phase

Input data format: a list in which each element (itself a list) represents one document; each document is the list of tokens it contains after word segmentation. The trained model, the TFIDF matrix, the list of article item_ids, the dictionary, and the corpus are each saved to disk. A toy sketch of this input format follows below.
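For clarity, here is a minimal sketch (two hypothetical toy documents, not from the original corpus) of the expected input format and the bag-of-words vectors that doc2bow produces from it:

```python
from gensim import corpora

# Hypothetical toy corpus: each document is a list of segmented tokens
train = [["机器", "学习", "算法"],
         ["深度", "学习", "模型"]]

dictionary = corpora.Dictionary(train)
# Each document becomes a list of (token_id, count) pairs
corpus = [dictionary.doc2bow(text) for text in train]
print(corpus)  # e.g. [[(0, 1), (1, 1), (2, 1)], [(1, 1), (3, 1), (4, 1)]]
```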
Building the gensim version of the TFIDF model takes the following 5 steps:
- 1. Build the dictionary: dictionary = corpora.Dictionary(train)
- 2. Build the corpus: corpus = [dictionary.doc2bow(text) for text in train]
- 3. Define the TFIDF model: tfidf_model = models.TfidfModel(corpus, dictionary=dictionary)
- 4. Train the model on the corpus and produce the TFIDF matrix: corpus_tfidf = tfidf_model[corpus]
- 5. Build the cosine-similarity index: index = similarities.SparseMatrixSimilarity(corpus_tfidf, num_features=featurenum). SparseMatrixSimilarity() uses less memory and disk space than a dense index.
```python
from gensim import corpora, similarities, models
from sklearn.externals import joblib
import jieba
import pandas as pd

# Load the stopword list, one word per line
stopwords = [line.strip() for line in open('./doc/stopword.txt', 'r', encoding='utf-8').readlines()]

def chinese_word_cut(mytext):
    # Segment Chinese text with jieba, dropping stopwords
    seg_list = []
    seg_text = jieba.cut(mytext)
    for word in seg_text:
        if word not in stopwords:
            seg_list.append(word)
    return " ".join(seg_list)

df = pd.read_csv("./doc/corpora.csv", sep='\t', encoding='utf-8')
t = pd.DataFrame(df['content'].astype(str))
df["content"] = t['content'].apply(chinese_word_cut)

train = []          # one token list per document
train_item_id = []  # item_id of each document, in the same order as train
for i in range(len(df["content"])):
    line = df["content"][i]
    line = line.split()
    train.append([w for w in line])
    train_item_id.append(df["item_id"][i])
print(len(train))

# Step 1: build the dictionary
dictionary = corpora.Dictionary(train)
# Step 2: build the corpus, an iterable of bag-of-words vectors
corpus = [dictionary.doc2bow(text) for text in train]
# Step 3: fit the TFIDF model; this computes the IDF value of every feature that appears in corpus
tfidf_model = models.TfidfModel(corpus, dictionary=dictionary)
# Step 4: transform the corpus into the TFIDF matrix
corpus_tfidf = tfidf_model[corpus]

dictionary.save('train_dictionary.dict')                # save the dictionary
tfidf_model.save('train_tfidf.model')                   # save the TFIDF model
corpora.MmCorpus.serialize('train_corpuse.mm', corpus)  # save the corpus

featurenum = len(dictionary.token2id.keys())  # number of features, via token2id
# Step 5: sparse-matrix similarity; initialize the index with the TFIDF
# document vectors that queries will be compared against
index = similarities.SparseMatrixSimilarity(corpus_tfidf, num_features=featurenum)
index.save('train_index.index')
# pickle.dump(train_item_id, 'item_id.pkl') fails here, because pickle.dump
# expects a file object with a write attribute; joblib accepts a filename directly
joblib.dump(train_item_id, 'item_id.pkl')
```
Part 2: Testing phase

Apply the trained model to a new document and compute cosine similarities. For the given new document, find the five most similar articles in the training set and return them as recommendations.
Code notes:
- 1 import warnings; warnings.filterwarnings(action='ignore', category=UserWarning, module='gensim') suppresses gensim's UserWarnings.
- 2 pickle.dump() raised an error because it expects a file object with a write attribute, not a filename. Switching to from sklearn.externals import joblib fixes this: joblib's dump and load are used like pickle's, but accept a filename directly.
- 3 index.get_similarities(test_vec) returns the cosine similarity between test_vec and every document in the training corpus, as a numpy array.
- 4 related_doc_indices = sim.argsort()[:-6:-1] sorts the numpy array's indices and keeps the indices of the 5 largest values, in descending order; see the short sketch after this list.
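To make note 4 concrete, here is a small self-contained numpy sketch (with made-up similarity scores) of the argsort()[:-6:-1] trick:

```python
import numpy as np

# Made-up similarity scores for 8 training documents
sim = np.array([0.12, 0.80, 0.05, 0.33, 0.91, 0.27, 0.66, 0.48])

# argsort() returns indices ordered by ascending score;
# [:-6:-1] walks that ordering backwards and stops after 5 entries,
# yielding the indices of the 5 largest scores, biggest first
related_doc_indices = sim.argsort()[:-6:-1]
print(related_doc_indices)  # [4 1 6 7 3]
```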
```python
import jieba
from sklearn.externals import joblib
import warnings
warnings.filterwarnings(action='ignore', category=UserWarning, module='gensim')
from gensim import corpora, similarities, models

# Load the stopword list, one word per line
stopwords = [line.strip() for line in open('./doc/stopword.txt', 'r', encoding='utf-8').readlines()]

def chinese_word_cut(mytext):
    # Segment Chinese text with jieba, dropping stopwords
    seg_list = []
    seg_text = jieba.cut(mytext)
    for word in seg_text:
        if word not in stopwords:
            seg_list.append(word)
    return seg_list

# Read an article from disk
def readfile(path):
    fp = open(path, "r", encoding="utf-8")
    content = fp.read()
    fp.close()
    return content

doc = readfile('doc/re0.txt')
test = chinese_word_cut(doc)

# Load the artifacts saved during training
dictionary = corpora.Dictionary.load("train_dictionary.dict")
tfidf = models.TfidfModel.load("train_tfidf.model")
index = similarities.SparseMatrixSimilarity.load('train_index.index')
item_id_list = joblib.load('item_id.pkl')
corpus = corpora.MmCorpus('train_corpuse.mm')
print('Model loading complete')

# Produce the BOW vector for the new document
vec = dictionary.doc2bow(test)
# Turn the BOW vector into a TFIDF vector
test_vec = tfidf[vec]
# Cosine similarity between test_vec and every training document
sim = index.get_similarities(test_vec)
# Indices of the five most similar documents, most similar first
related_doc_indices = sim.argsort()[:-6:-1]
print(related_doc_indices)

idlist = []  # item_ids of the recommended articles
for i in related_doc_indices:
    idlist.append(item_id_list[i])
print(idlist)
```
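As a side note, gensim also supports querying a similarity index with the indexing syntax index[test_vec]. Assuming the defaults used above (num_best=None, and TFIDF vectors already L2-normalized by the model), this should yield an equivalent similarity array:

```python
# Equivalent query via gensim's documented __getitem__ interface
sim = index[test_vec]
related_doc_indices = sim.argsort()[:-6:-1]
```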