I have recently been working through Morvan (莫烦)'s NLP course. While implementing the TF-IDF algorithm I ran into a few small issues, mainly around how the values are actually computed, so I am writing this up to reinforce the details.
There are many ways to compute TF-IDF. This post follows the formulation used in sklearn, which differs a bit from the course's sample code and took some effort to reconcile.
TF-IDF (term frequency-inverse document frequency) is a widely used weighting technique in information retrieval and text mining. It uses simple statistics to estimate how important a word is to a particular document and is often used to extract keywords from articles. Because it is simple and efficient, it is commonly used in the coarse-ranking stage of information retrieval.
The core idea of TF-IDF is to score, statistically, how important a word is with respect to a document collection or corpus: a word is more important the more often it appears in a given document, and less important the more documents in the corpus it appears in. This weighting effectively suppresses the influence of common words and improves the relevance between the extracted keywords and the document.
TF (Term Frequency) refers to the number of times a word appears in a given document. The formula is simply:
TF = number of times the word appears in the document
In other words, looking only at a single document, a word seems more important the more often it appears, but this is not absolute. Words such as a, the, and of certainly appear many times, yet they clearly carry no important information. That is why we also need IDF.
Note: there is also the variant "TF = (times the word appears in the document) / (total number of words in the document)", but sklearn simply uses the raw count.
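As a quick illustration of the two TF variants above, here is a minimal sketch (my own toy example, not part of the course code) that computes both the raw-count TF used by sklearn and the frequency-normalized variant:

from collections import Counter

doc = "it is a good day i like to stay here it is".split()
counts = Counter(doc)

raw_tf = counts["it"]              # raw count, as used by sklearn: 2
norm_tf = counts["it"] / len(doc)  # count / total words in the document: 2 / 12
print(raw_tf, norm_tf)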
IDF (Inverse Document Frequency) measures how well a word discriminates between documents in a collection or corpus. With sklearn's smoothing, the formula is:
IDF = ln( (Nd + 1) / (df(d, t) + 1) ) + 1
where Nd is the total number of documents in the training set and df(d, t) is the number of documents containing the word t. The +1 in the denominator prevents division by zero (the matching +1 in the numerator keeps the ratio balanced).
In other words, for a given collection or corpus, the fewer documents a word appears in, the larger its IDF, the stronger its discriminative power, and the more important it is.
Note in particular that IDF is defined with respect to a specific collection or corpus. An IDF computed on computer-science texts is usually not appropriate for, say, the medical domain.
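To confirm that this is really what sklearn computes, here is a minimal verification sketch (my own check on a made-up toy corpus, not part of the course code) comparing a NumPy implementation of the smoothed IDF against TfidfVectorizer's idf_ attribute:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

toy_docs = ["it is a good day", "today is a good day", "it is sunny today"]

counts = CountVectorizer().fit_transform(toy_docs).toarray()  # raw term counts, shape [n_doc, n_vocab]
df = (counts > 0).sum(axis=0)                                 # df(d, t): number of docs containing each word
n_d = counts.shape[0]                                         # Nd: total number of documents
manual_idf = np.log((n_d + 1) / (df + 1)) + 1                 # the smoothed IDF formula above

tv = TfidfVectorizer()  # smooth_idf=True is the default
tv.fit(toy_docs)
print(np.allclose(manual_idf, tv.idf_))                       # expected: True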
Combining TF, the local information centered on one document, with IDF, the global information derived from the whole corpus, gives the final formula:
TF-IDF = TF * IDF
Note in particular:
In sklearn, the TF-IDF vector of each document computed as above is then normalized with the Euclidean (L2) norm:
v_norm = v / ||v||_2 = v / sqrt(v_1^2 + v_2^2 + ... + v_n^2)
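For concreteness, here is a minimal sketch of this L2 normalization (the numbers are my own example), showing that dividing by the Euclidean norm matches sklearn's preprocessing.normalize:

import numpy as np
from sklearn import preprocessing

v = np.array([[3.0], [4.0]])                # one tf-idf column vector
manual = v / np.sqrt(np.sum(np.square(v)))  # divide by the Euclidean norm -> [[0.6], [0.8]]
sk = preprocessing.normalize(v, norm='l2', axis=0)
print(np.allclose(manual, sk))              # expected: True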
The code below is adapted from the source code of Morvan's NLP course.
It takes 15 documents, builds a 44-word vocabulary (after removing the two very frequent words a and i), and computes the tf-idf matrix of the 15 documents. It then takes a query sentence, computes its tf-idf vector, computes the cosine similarity between that vector and every document's tf-idf vector, and returns the three closest documents as the search result.
The core idea is vectorization: once the documents and the query sentence are both turned into vectors, how close they are can be compared by computing the cosine distance. Note that similarity here is measured by the cosine of the angle between the two vectors rather than the more familiar Euclidean distance: cosine similarity focuses on the difference in direction between the two vectors rather than the difference in magnitude.
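To make the direction-versus-magnitude point concrete, here is a tiny sketch (the numbers are made up for illustration): two vectors pointing in the same direction have cosine similarity 1 even though their Euclidean distance is large.

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = 10 * a                                                # same direction, 10x the magnitude

cos = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
dist = np.linalg.norm(a - b)
print(cos, dist)                                          # cosine ~ 1.0, Euclidean distance ~ 33.7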
import numpy as np
from collections import Counter
import itertools
from sklearn import preprocessing
from plot import show_tfidf  # plotting helper from the course repository
# from sklearn.metrics.pairwise import cosine_similarity


# 15 documents
docs = [
    "it is a good day, I like to stay here",
    "I am happy to be here",
    "I am bob",
    "it is sunny today",
    "I have a party today",
    "it is a dog and that is a cat",
    "there are dog and cat on the tree",
    "I study hard this morning",
    "today is a good day",
    "tomorrow will be a good day",
    "I like coffee, I like book and I like apple",
    "I do not like it",
    "I am kitty, I like bob",
    "I do not care who like bob, but I like kitty",
    "It is coffee time, bring your cup",
]

# Vocabulary of 44 words, after removing the two very frequent single-letter words
docs_words = [d.lower().replace(",", "").split(" ") for d in docs]
wordlist = list(itertools.chain(*docs_words))  # flatten the nested lists; * unpacks them as separate arguments to chain
vocablist = list(set(wordlist))
vocablist.sort(key=wordlist.index)  # set() removes duplicates; sort by first occurrence to keep the original order
vocablist.remove('a')  # drop 'a' and 'i' to stay consistent with sklearn, whose default tokenizer ignores single-character tokens
vocablist.remove('i')
# print(vocablist)

v2i = {v: i for i, v in enumerate(vocablist)}  # word -> index
i2v = {i: v for v, i in v2i.items()}           # index -> word (reverse mapping)
# print(v2i)
# print(i2v)


# tf: raw count of each word in each document
def get_tf():
    # term frequency: how often a word appears in a doc (raw count, as sklearn does)
    _tf = np.zeros((len(vocablist), len(docs)), dtype=np.float64)  # [n_vocab, n_doc] => 44 * 15 matrix
    for i, d in enumerate(docs_words):       # loop over the documents
        counter = Counter(d)
        for v in counter.keys():             # word counts of this document
            if v in v2i:
                _tf[v2i[v], i] = counter[v]  # raw count of each word
    return _tf


# idf = 1 + np.log((len(docs) + 1) / (number of docs containing the word + 1)), i.e. sklearn's smoothed formula
def get_idf():
    # inverse document frequency: a word that appears in many docs gets a low idf, meaning it is less informative
    df = np.zeros((len(i2v), 1))
    for i in range(len(i2v)):  # loop over every word in the vocabulary
        d_count = 0
        for d in docs_words:
            d_count += 1 if i2v[i] in d else 0  # number of documents this word appears in
        df[i, 0] = d_count

    idf_fn = lambda x: 1 + np.log((len(docs) + 1) / (x + 1))
    return idf_fn(df)


def cosine_similarity(_tf_idf, q):
    # normalize each document column and the query vector to unit length, then take dot products
    unit_ds = _tf_idf / np.sqrt(np.sum(np.square(_tf_idf), axis=0, keepdims=True))
    unit_q = q / np.sqrt(np.sum(np.square(q), axis=0, keepdims=True))
    similarity = unit_ds.T.dot(unit_q).ravel()
    return similarity


# Score every document against the query; query words that are not in the vocabulary are ignored
def docs_score(q):
    q_words = q.lower().replace(",", "").split(" ")  # preprocess the query the same way as the documents
    counter = Counter(q_words)
    q_tf = np.zeros((len(idf), 1), dtype=float)
    for v in counter.keys():
        if v in v2i:
            q_tf[v2i[v], 0] = counter[v]  # raw count of each query word

    q_vec = q_tf * idf
    q_tf_idf = preprocessing.normalize(q_vec, norm='l2', axis=0)  # L2 (Euclidean norm) normalization

    # q_scores = cosine_similarity(tf_idf.transpose(), q_tf_idf.transpose())  # if using sklearn's function, pass the normalized vectors
    q_scores = cosine_similarity(origin_tf_idf, q_vec)  # with the hand-written function above, pass the un-normalized ones

    return q_scores


# Print the n words with the highest tf-idf in each document
def get_keywords(n=2):
    for c in range(len(docs)):
        col = tf_idf[:, c]
        idx = np.argsort(col)[-n:][::-1]  # argsort is ascending; take the last n indices and reverse them
        print("doc{}, top{} keywords {}".format(c, n, [i2v[i] for i in idx]))


# ---------- TEST
tf = get_tf()             # [n_vocab, n_doc] 44*15
idf = get_idf()           # [n_vocab, 1]     44*1
origin_tf_idf = tf * idf  # [n_vocab, n_doc] 44*15
tf_idf = preprocessing.normalize(origin_tf_idf, norm='l2', axis=0)  # L2 (Euclidean norm) normalization

print("\ntf samples:\n", tf[:2])
print("\nidf sample:\n", idf[:2])
print("\ntf_idf sample:\n", tf_idf[:2])

# --- extract keywords
get_keywords()

# --- search for the most similar documents
q = "I get a coffee cup"
scores = docs_score(q)
d_ids = scores.ravel().argsort()[-3:][::-1]
print("\ntop 3 docs for '{}':\n{}".format(q, [docs[i] for i in d_ids]))

show_tfidf(tf_idf.T, [i2v[i] for i in range(tf_idf.shape[0])], "tfidf_matrix")
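As a sanity check that the hand-written version really matches sklearn, the following sketch (my own addition, appended to the program above, not from the course) reorders the rows of the hand-written tf_idf matrix into sklearn's alphabetical column order and compares the two matrices element-wise. It should print True as long as both versions end up with the same 44-word vocabulary.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

vec = TfidfVectorizer()
sk_tfidf = vec.fit_transform(docs).toarray()                      # [n_doc, n_vocab], L2-normalized rows
feature_order = sorted(vec.vocabulary_, key=vec.vocabulary_.get)  # words in sklearn's column order
row_order = [v2i[w] for w in feature_order]                       # corresponding rows of the hand-written matrix
print(np.allclose(tf_idf[row_order, :].T, sk_tfidf))              # expected: True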
# The same search implemented directly with sklearn's TfidfVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from plot import show_tfidf


docs = [
    "it is a good day, I like to stay here",
    "I am happy to be here",
    "I am bob",
    "it is sunny today",
    "I have a party today",
    "it is a dog and that is a cat",
    "there are dog and cat on the tree",
    "I study hard this morning",
    "today is a good day",
    "tomorrow will be a good day",
    "I like coffee, I like book and I like apple",
    "I do not like it",
    "I am kitty, I like bob",
    "I do not care who like bob, but I like kitty",
    "It is coffee time, bring your cup",
]

vectorizer = TfidfVectorizer()
tf_idf = vectorizer.fit_transform(docs)
# print("idf: ", [(n, idf) for idf, n in zip(vectorizer.idf_, vectorizer.get_feature_names())])
# print("v2i: ", vectorizer.vocabulary_)
# print(tf_idf)


q = "I get a coffee cup"
qtf_idf = vectorizer.transform([q])

res = cosine_similarity(tf_idf, qtf_idf)
res = res.ravel().argsort()[-3:]
print("\ntop 3 docs for '{}':\n{}".format(q, [docs[i] for i in res[::-1]]))

i2v = {i: v for v, i in vectorizer.vocabulary_.items()}
dense_tfidf = tf_idf.todense()  # tf_idf is a sparse matrix, densify it for plotting
show_tfidf(dense_tfidf, [i2v[i] for i in range(dense_tfidf.shape[1])], "tfidf_sklearn_matrix")