This article is the fourth in my series covering the sessions I delivered for the WomenWhoCode Data Science track in March 2023. The earlier articles are here: Part 1 (an introduction to NLP), Part 2 (the NLTK and SpaCy libraries), and Part 3 (text preprocessing techniques).
One-Hot Encoding:
This is the simplest technique for representing text as numeric vectors. Each word is represented as a unique "one-hot" binary vector of 0s and 1s. For each unique word in the vocabulary, the vector contains a single 1 and all other values are 0; the position of the 1 in the vector uniquely identifies the word.
Example:
One-hot vectors for the words Apple, Banana, Orange and Mango
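As an illustration (a minimal sketch, assuming the four-word vocabulary from the example above), the one-hot vectors would look like this:

# Hypothetical 4-word vocabulary; the position of the 1 identifies the word
one_hot = {
    "Apple":  [1, 0, 0, 0],
    "Banana": [0, 1, 0, 0],
    "Orange": [0, 0, 1, 0],
    "Mango":  [0, 0, 0, 1],
}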
from sklearn.preprocessing import OneHotEncoder

document = "The rose is red. The violet is blue."
document = document.split()
tokens = [doc.split(" ") for doc in document]

# Assign a unique integer id to each distinct word
wordids = {token: idx for idx, token in enumerate(set(document))}
tokenids = [[wordids[token] for token in toke] for toke in tokens]

# Each word id becomes a binary column; each row contains a single 1
onehotmodel = OneHotEncoder()
vectors = onehotmodel.fit_transform(tokenids)
print(vectors.todense())
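As a small follow-up (a sketch, assuming the block above has already run), you can check which column of the one-hot matrix corresponds to which word:

# Map the encoder's column order (sorted word ids) back to the words themselves
id_to_word = {idx: token for token, idx in wordids.items()}
print([id_to_word[i] for i in onehotmodel.categories_[0]])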
Bag of Words (BoW):
For more details see: https://en.wikipedia.org/wiki/Bag-of-words_model
Bag of Words (BoW) is an unordered text representation that describes the occurrence of words in a document. It consists of a vocabulary of the known words in the document and a measure of the presence of those known words. The bag-of-words model does not retain any information about the order or structure of the words in the document.
Example from Wikipedia:
Document 1: John likes to watch movies. Mary likes movies too.
Document 2: Mary also likes to watch football games.
Tokens of Document 1: "John", "likes", "to", "watch", "movies", "Mary", "likes", "movies", "too"
Tokens of Document 2: "Mary", "also", "likes", "to", "watch", "football", "games"
BoW1 = {"John":1, "likes":2, "to":1, "watch":1, "movies":2, "Mary":1, "too":1};
BoW2 = {"Mary":1, "also":1, "likes":1, "to":1, "watch":1, "football":1, "games":1};
Document 3 is the union of Document 1 and Document 2 (it contains the words from both documents)
Document 3: John likes to watch movies. Mary likes movies too. Mary also likes to watch football games.
BoW3 = {"John":1, "likes":3, "to":2, "watch":2, "movies":2, "Mary":2, "too":1, "also":1, "football":1, "games":1}
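The same counts can be reproduced with Python's collections.Counter (a minimal sketch of the example above):

from collections import Counter

doc1 = "John likes to watch movies Mary likes movies too".split()
doc2 = "Mary also likes to watch football games".split()

print(Counter(doc1))          # BoW1
print(Counter(doc2))          # BoW2
print(Counter(doc1 + doc2))   # BoW3: counts over both documents combined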
Let us write a function to preprocess the text before representing it as vectors.
# This process_text() function returns the cleaned text
import re
import string
import unicodedata
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

stop_words = stopwords.words('english')
lemmatizer = WordNetLemmatizer()

def process_text(text):
    # Remove non-ASCII characters
    text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8', 'ignore')
    # Remove non-alphabetic characters
    text = re.sub(r'[^a-zA-Z\s]', '', text)
    # Remove punctuation marks
    text = text.translate(str.maketrans('', '', string.punctuation))
    # Convert to lower case
    text = text.lower()
    # Remove stopwords
    text = " ".join([word for word in str(text).split() if word not in stop_words])
    # Lemmatize
    text = " ".join([lemmatizer.lemmatize(word) for word in text.split()])
    return text
Next, we use CountVectorizer from the sklearn library to convert the preprocessed text into a bag-of-words representation.
# https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
# https://stackoverflow.com/questions/27697766/understanding-min-df-and-max-df-in-scikit-countvectorizer
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd
import nltk

document = ["The", "rose", "is", "red", "The", "violet", "is", "blue"]
processed_document = [process_text(item) for item in document]
processed_document = [x for x in processed_document if x != '']
print(processed_document)

bow_countvect = CountVectorizer(min_df = 0., max_df = 1.)
matrix = bow_countvect.fit_transform(processed_document)
vocabulary = bow_countvect.get_feature_names_out()
print(matrix)
matrix.todense()
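A small follow-up (assuming the block above has run): viewing the bag-of-words matrix with the vocabulary as column labels makes it easier to read.

import pandas as pd

bow_df = pd.DataFrame(matrix.toarray(), columns = vocabulary)
print(bow_df)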
The simple bag-of-words model does not store any information about word order. An n-gram model can capture this information.
A word/token is called a "gram". An n-gram is a contiguous sequence of n tokens appearing in a text document.
A unigram is a single word, a bigram is a pair of consecutive words, a trigram is a group of 3 consecutive words, and so on.
For example, for the text (from Wikipedia):
Document 1: John likes to watch movies. Mary likes movies too.
A bigram model parses the text into the following units and stores the term frequency of each unit, just as in the simple BoW model.
["John likes", "likes to", "to watch", "watch movies", "Mary likes", "likes movies", "movies too"]
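A minimal sketch (using nltk.ngrams, and splitting on sentences so that bigrams do not cross sentence boundaries) that produces the list above:

import nltk

sentences = ["John likes to watch movies", "Mary likes movies too"]
bigrams = []
for sent in sentences:
    bigrams += [" ".join(bg) for bg in nltk.ngrams(sent.split(), 2)]
print(bigrams)
# ['John likes', 'likes to', 'to watch', 'watch movies', 'Mary likes', 'likes movies', 'movies too']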
The bag-of-words model can be seen as a special case of the n-gram model with n = 1.
# https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
from sklearn.feature_extraction.text import CountVectorizer

document = ["The rose is red.", "The violet is blue.", "This is some text, just for demonstration"]
ngram_countvect = CountVectorizer(ngram_range = (2, 2), stop_words = 'english')
# The ngram_range parameter of CountVectorizer indicates the lower and upper boundary of the range of n-values for
# different word n-grams or char n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used.
# For example an ngram_range of (1, 1) means only unigrams, (1, 2) means unigrams and bigrams, and (2, 2) means only bigrams.

matrix = ngram_countvect.fit_transform(document)
vocabulary = ngram_countvect.get_feature_names_out()
matrix.todense()
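To inspect which bigrams ended up in the vocabulary (a small follow-up, assuming the block above has run):

print(vocabulary)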
TF-IDF (Term Frequency - Inverse Document Frequency) weights each term by how often it occurs in a document (TF) and scales that down by how many documents in the corpus contain it (IDF), so words that appear everywhere receive low weights. A very good explanation of the TF-IDF vectorizer can be found here.
from sklearn.feature_extraction.text import TfidfVectorizer

document = ["The rose is red.", "The violet is blue.", "This is some text, just for demonstration"]

tf_idf = TfidfVectorizer(min_df = 0., max_df = 1., use_idf = True)
tf_idf_matrix = tf_idf.fit_transform(document)
tf_idf_matrix = tf_idf_matrix.toarray()
tf_idf_matrix
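By default, TfidfVectorizer uses a smoothed idf and L2-normalizes each row. A small follow-up (assuming the block above has run) to view the weights with their vocabulary labels:

import pandas as pd

tfidf_df = pd.DataFrame(tf_idf_matrix, columns = tf_idf.get_feature_names_out())
print(tfidf_df.round(2))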
The text representation methods above generally do not capture the semantics and context of words. To overcome these limitations we use embeddings. Embeddings are learned by training models on large datasets. They capture the context of a word by taking into account its neighbouring words and the order of words in a sentence. Three prominent word embeddings are: Word2Vec, GloVe and FastText.
Word2Vec
The CBOW model tries to predict the current target word from its surrounding context words. The Skip-Gram model tries to predict the surrounding context words given the target word.
from gensim.models import word2vec
import nltk

document = ["The rose is red.", "The violet is blue.", "This is some text, just for demonstration"]
tokenized_corpus = [nltk.word_tokenize(doc) for doc in document]

# Parameters of the word2vec model
# vector_size : integer : Word vector dimensionality
# window : integer : The maximum distance between the current and predicted word within a sentence (2, 10)
# min_count : integer : Ignores all words with total absolute frequency lower than this (2, 100)
# sample : float : The threshold for configuring which higher-frequency words are randomly downsampled. Highly influential (0, 1e-5)
# sg : integer : Skip-gram model configuration; CBOW by default
wordtovector = word2vec.Word2Vec(tokenized_corpus, window = 3, min_count = 1, sg = 1)

print('Embedding of the word blue')
print(wordtovector.wv['blue'])
print('Size of Embedding of the word blue')
print(wordtovector.wv['blue'].shape)
If you want to see all the vectors in the vocabulary, use the following code:
# All the vectors for all the words in our input text
words = wordtovector.wv.index_to_key
wvs = wordtovector.wv[words]
wvs
Or convert them into a pandas dataframe:
import pandas as pd
df = pd.DataFrame(wvs, index = words)
df
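Gensim also lets you query nearest neighbours in the embedding space. With this tiny toy corpus the results are not meaningful, but the call looks like this (a sketch, assuming the model trained above):

# Words whose embeddings are closest to 'blue' (by cosine similarity)
print(wordtovector.wv.most_similar('blue', topn = 3))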
GloVe
import spacy
import nltk
import numpy as np
import pandas as pd

nlp = spacy.load('en_core_web_lg')
total_vectors = len(nlp.vocab.vectors)
print('Total word vectors:', total_vectors)

document = ["The rose is red.", "The violet is blue.", "This is some text, just for demonstration"]
tokenized_corpus = [nltk.word_tokenize(doc) for doc in document]
vocab = list(set([word for wordlist in tokenized_corpus for word in wordlist]))

# SpaCy's nlp pipeline provides pretrained vectors for these words
glovevectors = np.array([nlp(word).vector for word in vocab])
glove_vec_df = pd.DataFrame(glovevectors, index = vocab)
glove_vec_df
If you want to see the GloVe vector for the word "violet", use the code:
glove_vec_df.loc['violet']
Want to see all the vectors for the vocabulary?
glovevectors
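Unlike one-hot or BoW vectors, these embeddings let you measure semantic similarity, for example via cosine similarity (a sketch, assuming the variables above exist):

from sklearn.metrics.pairwise import cosine_similarity

# Cosine similarity between the GloVe vectors of 'red' and 'blue'
print(cosine_similarity([glove_vec_df.loc['red']], [glove_vec_df.loc['blue']])[0][0])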
Visualizing the data points with TSNE
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

tsne = TSNE(n_components = 2, random_state = 42, n_iter = 250, perplexity = 3)
tsneglovemodel = tsne.fit_transform(glovevectors)
labels = vocab
plt.figure(figsize=(12, 6))
plt.scatter(tsneglovemodel[:, 0], tsneglovemodel[:, 1], c='red', edgecolors='r')
for label, x, y in zip(labels, tsneglovemodel[:, 0], tsneglovemodel[:, 1]):
    plt.annotate(label, xy=(x+1, y+1), xytext=(0, 0), textcoords='offset points')
plt.show()
FastText was trained on Wikipedia and Common Crawl. It provides word vectors for 157 languages trained on Wikipedia and Crawl data, and also includes models for language identification and various supervised tasks. You can experiment with FastText vectors through the gensim library.
import warnings
warnings.filterwarnings("ignore")

from gensim.models.fasttext import FastText
import nltk

document = ["The rose is red.", "The violet is blue.", "This is some text, just for demonstration"]
tokenized_corpus = [nltk.word_tokenize(doc) for doc in document]

fasttext_model = FastText(tokenized_corpus, window = 5, min_count = 1, sg = 1)
print('Embedding')
print(fasttext_model.wv['blue'])

print('Embedding Shape')
print(fasttext_model.wv['blue'].shape)
To see the vectors for the words in the vocabulary, you can use the following code:
words_fasttext = fasttext_model.wv.index_to_key
wordvectors_fasttext = fasttext_model.wv[words_fasttext]
wordvectors_fasttext
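Because FastText builds word vectors from character n-grams, it can also produce a vector for a word that never appeared in the training corpus. A quick check (a sketch; 'bluish' is just an assumed out-of-vocabulary word):

print('bluish' in fasttext_model.wv.key_to_index)   # False: not in the learned vocabulary
print(fasttext_model.wv['bluish'].shape)             # FastText can still compose a vector for it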
In the next article of this series, we will cover text classification.