Learn how to remove stopwords and perform text normalization in Python, essential natural language processing techniques
Explore different methods to remove stopwords, and discuss text normalization techniques like stemming and lemmatization
Remove stopwords and perform text normalization in Python using the NLTK, spaCy and Gensim libraries
The sheer versatility of natural language processing (NLP) is remarkable. Things we never imagined possible before can now be done with just a few lines of code. It is genuinely delightful.
But working with text data brings its own set of challenges. Machines have a hard time dealing with raw text. We need to perform certain steps, collectively known as preprocessing, before applying NLP techniques to text data.
Skipping these steps leaves us with a poor model. These are core NLP techniques you need to build into your code, frameworks and projects.
We will discuss how to remove stopwords and perform text normalization in Python using a few very popular NLP libraries: NLTK, spaCy, Gensim and TextBlob.
What are stopwords?
Why do we need to remove stopwords?
When should we remove stopwords?
Different methods to remove stopwords
Using NLTK
Using spaCy
Using Gensim
Introduction to text normalization
What are stemming and lemmatization?
Methods to perform stemming and lemmatization
Using NLTK
Using spaCy
Using TextBlob
Stopwords are the most common words in any natural language. For the purpose of analyzing text data and building NLP models, these stopwords may not add much value to the meaning of the document.
Generally, the most common words used in English text are "the", "is", "in", "for", "where", "when", "to", "at" and so on.
Consider the text "There is a pen on the table". The words "is", "a", "on" and "the" add no meaning to the statement while parsing it, whereas words like "there", "pen" and "table" are the keywords that tell us what the sentence is about.
Tokenization is generally performed before removing stopwords.
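To make the idea concrete, here is a minimal sketch in plain Python, using a tiny hand-picked stopword set purely for illustration (real projects would use a library list like the ones shown later):
- # Tiny illustrative stopword set (not a real library list)
- stop_words = {"is", "a", "on", "the"}
- sentence = "There is a pen on the table"
- tokens = sentence.lower().split()   # naive whitespace tokenization
- filtered = [w for w in tokens if w not in stop_words]
- print(filtered)   # ['there', 'pen', 'table']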
Here is a list of stopwords that you may find useful:
a about after all also always am an and any are at be been being but by came can cant come
This is a very important question, and one you must keep in mind.
Removing stopwords is not a hard and fast rule in NLP. It depends on the task we are working on. For tasks like text classification, where the text is to be classified into different categories, removing or excluding stopwords from the given text lets the model focus more on the words that define the meaning of the text.
As we saw in the previous section, words like there and pen carry more meaning than words like is and on.
However, for tasks like machine translation and text summarization, removing stopwords is not advisable.
Here are some of the key benefits of removing stopwords:
Removing stopwords shrinks the dataset, which reduces the time needed to train the model
Removing stopwords can potentially help improve performance, as fewer and only meaningful tokens are left over; this can improve classification accuracy
Even search engines like Google remove stopwords in order to retrieve data from their databases quickly
I have divided this into two parts: cases where we remove stopwords and cases where we avoid doing so.
Remove stopwords
We can remove stopwords while performing the following tasks:
Text classification
Spam filtering
Language classification
Genre classification
Caption generation
Auto-tag generation
Avoid removing stopwords
Machine translation
Language modeling
Text summarization
Question-answering (QA) systems
1. Removing stopwords using NLTK
NLTK, the Natural Language Toolkit, is a library for text preprocessing and one of my favorite Python libraries. NLTK has stopword lists for 16 different languages.
You can view the list of stopwords in NLTK with the following code:
- import nltk
- from nltk.corpus import stopwords
- # nltk.download('stopwords')  # run this once to fetch the corpus
- print(set(stopwords.words('english')))
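NLTK also ships stopword lists for other languages. To see which ones are available (again assuming the stopwords corpus has been downloaded), you can list the corpus files:
- from nltk.corpus import stopwords
- # Each file corresponds to one language's stopword list
- print(stopwords.fileids())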
Now, to remove stopwords using NLTK, you can use the following code block:
- # The code below removes stopwords from a sentence using NLTK
-
- # Import the packages
- import nltk
- from nltk.corpus import stopwords
- from nltk.tokenize import word_tokenize
- # nltk.download('stopwords'); nltk.download('punkt')  # run once
-
- # Example sentence
- text = """He determined to drop his litigation with the monastry, and relinguish his claims to the wood-cuting and
- fishery rihgts at once. He was the more ready to do this becuase the rights had become much less valuable, and he had
- indeed the vaguest idea where the wood and river in question were."""
-
- # Set of stopwords
- stop_words = set(stopwords.words('english'))
-
- # Tokenize the text
- word_tokens = word_tokenize(text)
-
- # Keep only the tokens that are not stopwords
- filtered_sentence = []
- for w in word_tokens:
-     if w not in stop_words:
-         filtered_sentence.append(w)
-
- print("\n\nOriginal Sentence \n\n")
- print(" ".join(word_tokens))
-
- print("\n\nFiltered Sentence \n\n")
- print(" ".join(filtered_sentence))
Here is our sentence after tokenization:
- He determined to drop his litigation with the monastry, and relinguish his claims to the
- wood-cuting and fishery rihgts at once. He was the more ready to do this becuase the rights
- had become much less valuable, and he had indeed the vaguest idea where the wood and river
- in question were.
And after removing stopwords:
- He determined drop litigation monastry, relinguish claims wood-cuting fishery rihgts. He
- ready becuase rights become much less valuable, indeed vaguest idea wood river question.
Notice that the size of the text has almost been cut in half! Can you picture just how useful removing stopwords can be?
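If you want to quantify that reduction, here is a quick sketch that reuses the word_tokens and filtered_sentence lists from the code above:
- # Compare token counts before and after filtering
- print(len(word_tokens), len(filtered_sentence))
- print("Reduction: {:.0%}".format(1 - len(filtered_sentence) / len(word_tokens)))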
2. Removing stopwords using spaCy
spaCy is one of the most versatile and widely used libraries in NLP. We can quickly and efficiently remove stopwords from a given text using spaCy, which has its own list of stopwords that can be imported from spacy.lang.en.stop_words.
Here is how you can remove stopwords using spaCy in Python:
- from spacy.lang.en import English
-
- # Create a blank English pipeline (tokenization only)
- nlp = English()
-
- text = """He determined to drop his litigation with the monastry, and relinguish his claims to the wood-cuting and
- fishery rihgts at once. He was the more ready to do this becuase the rights had become much less valuable, and he had
- indeed the vaguest idea where the wood and river in question were."""
-
- # The "nlp" object is used to create a document with linguistic annotations
- my_doc = nlp(text)
-
- # Build the list of tokens
- token_list = []
- for token in my_doc:
-     token_list.append(token.text)
-
- # spaCy's own stopword list, in case you want to inspect it
- from spacy.lang.en.stop_words import STOP_WORDS
-
- # Create the list of words after removing stopwords
- filtered_sentence = []
- for word in token_list:
-     lexeme = nlp.vocab[word]
-     if not lexeme.is_stop:
-         filtered_sentence.append(word)
-
- print(token_list)
- print(filtered_sentence)
Here is the list of tokens we obtain after tokenization:
- He determined to drop his litigation with the monastry and relinguish his claims to the
- wood-cuting and \n fishery rihgts at once. He was the more ready to do this becuase the
- rights had become much less valuable, and he had \n indeed the vaguest idea where the wood
- and river in question were.
And the list after removing stopwords:
- determined drop litigation monastry, relinguish claims wood-cuting \n fishery rihgts. ready
- becuase rights become valuable, \n vaguest idea wood river question
One thing to note here is that removing stopwords does not get rid of punctuation or newline characters; we need to remove those manually.
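One way to handle that is to filter on spaCy's token attributes. Here is a sketch reusing the my_doc object from above (is_stop, is_punct and is_space are standard spaCy token flags):
- # Drop stopwords, punctuation and whitespace/newline tokens in one pass
- clean_tokens = [token.text for token in my_doc
-                 if not token.is_stop and not token.is_punct and not token.is_space]
- print(clean_tokens)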
3. Removing stopwords using Gensim
Gensim is a pretty handy library for NLP tasks. It too provides a way to remove stopwords during preprocessing: we can simply import the remove_stopwords method from the gensim.parsing.preprocessing module.
Let's try removing stopwords using Gensim:
- # The code below removes stopwords using Gensim
- from gensim.parsing.preprocessing import remove_stopwords
-
- # Pass the sentence to the remove_stopwords function
- result = remove_stopwords("""He determined to drop his litigation with the monastry, and relinguish his claims to the wood-cuting and fishery rihgts at once. He was the more ready to do this becuase the rights had become much less valuable,
- and he had indeed the vaguest idea where the wood and river in question were.""")
-
- print('\n\n Filtered Sentence \n\n')
- print(result)
- He determined drop litigation monastry, relinguish claims wood-cuting fishery rihgts once.
- He ready becuase rights valuable, vaguest idea wood river question were.
When removing stopwords with Gensim, we can work directly on the raw text. There is no need to perform tokenization before removing stopwords, which can save us a lot of time.
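Gensim also exposes its stopword list directly as the frozenset gensim.parsing.preprocessing.STOPWORDS, so you can inspect it or extend it with your own words. A minimal sketch (the extra words added here are arbitrary examples):
- from gensim.parsing.preprocessing import STOPWORDS
-
- # Extend the built-in frozenset with extra words of our own choosing
- my_stop_words = STOPWORDS.union({"pen", "table"})
- sentence = "There is a pen on the table"
- filtered = [w for w in sentence.lower().split() if w not in my_stop_words]
- print(filtered)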
In any natural language, words can be written or spoken in multiple forms depending on the situation. That is part of what makes language so beautiful. For example:
Lisa ate the food and washed the dishes.
They were eating noodles at a cafe.
Don’t you want to eat before we leave?
We have just eaten our breakfast.
It also eats fruit and vegetables.
In all these sentences, we can see that the word "eat" appears in multiple forms. For us, it is easy to understand that eating is the activity being referred to. So whether it is "eat", "ate" or "eaten" makes no difference, because we know what is happening.
Unfortunately, that is not the case with machines. They treat each of these words differently. We therefore need to normalize them to their root word, which is "eat" in our example.
Text normalization, then, is the process of transforming a word into a single canonical form. This can be done through two processes, stemming and lemmatization. Let's look at what each of them means in detail.
Stemming and lemmatization are simply forms of word normalization, which means reducing a word to its root form.
In most natural languages, a root word can have many variants. The word "play", for example, can appear as "playing", "played", "plays" and so on. You can no doubt think of similar examples (and there are plenty).
Stemming
Let's first understand stemming:
Stemming is a text normalization technique that cuts off the end or the beginning of a word, taking into account a list of common prefixes or suffixes that can be found in that word.
It is a basic rule-based process of stripping suffixes ("ing", "ly", "es", "s", etc.) from a word, as the sketch below shows.
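To see these suffix-stripping rules in action on the "play" example from earlier, here is a quick sketch with NLTK's PorterStemmer:
- from nltk.stem import PorterStemmer
-
- ps = PorterStemmer()
- # The Porter rules strip suffixes such as "ing", "ed" and "s"
- for word in ["play", "playing", "played", "plays"]:
-     print(word, "->", ps.stem(word))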
Lemmatization
Lemmatization, on the other hand, is a structured procedure for obtaining the root form of a word. It makes use of vocabulary (the dictionary importance of words) and morphological analysis (word structure and grammatical relations).
Let's consider the following two sentences:
He was driving
He went for a drive
We can easily tell that both sentences convey the same meaning, namely a driving activity in the past. A machine, however, will treat the two sentences differently. So, to make the text machine-understandable, we need to perform stemming or lemmatization.
Another benefit of text normalization is that it reduces the size of the vocabulary in the text data, which helps shorten the training time of machine learning models.
Which one should we choose?
A stemming algorithm works by cutting the suffix or prefix off the word. Lemmatization is a more powerful operation because it takes the word's morphological analysis into account.
Lemmatization returns the lemma, which is the root word of all its inflected forms.
We can say that stemming is a quick but crude way of chopping words down to their root form, whereas lemmatization is an intelligent operation that uses dictionaries built with in-depth linguistic knowledge. Lemmatization therefore tends to give better results.
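A small sketch that makes this difference concrete (it assumes the NLTK wordnet corpus has been downloaded via nltk.download('wordnet')):
- from nltk.stem import PorterStemmer, WordNetLemmatizer
-
- ps = PorterStemmer()
- wnl = WordNetLemmatizer()
-
- # The stemmer just chops suffixes; the lemmatizer consults WordNet
- print(ps.stem("studies"), "|", wnl.lemmatize("studies", pos="n"))   # studi | study
- print(ps.stem("better"), "|", wnl.lemmatize("better", pos="a"))     # better | good
- print(ps.stem("driving"), "|", wnl.lemmatize("driving", pos="v"))   # drive | drive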
1. Text normalization using NLTK
The NLTK library has many wonderful methods for performing the different steps of data preprocessing. Methods such as PorterStemmer() and WordNetLemmatizer() perform stemming and lemmatization, respectively.
Let's see them in action.
Stemming
- from nltk.corpus import stopwords
- from nltk.tokenize import word_tokenize
- from nltk.stem import PorterStemmer
-
- text = """He determined to drop his litigation with the monastry, and relinguish his claims to the wood-cuting and
- fishery rihgts at once. He was the more ready to do this becuase the rights had become much less valuable, and he had
- indeed the vaguest idea where the wood and river in question were."""
-
- stop_words = set(stopwords.words('english'))
-
- word_tokens = word_tokenize(text)
-
- # Remove stopwords first
- filtered_sentence = []
- for w in word_tokens:
-     if w not in stop_words:
-         filtered_sentence.append(w)
-
- # Stem each remaining token
- Stem_words = []
- ps = PorterStemmer()
- for w in filtered_sentence:
-     rootWord = ps.stem(w)
-     Stem_words.append(rootWord)
-
- print(filtered_sentence)
- print(Stem_words)
- He determined drop litigation monastry, relinguish claims wood-cuting fishery rihgts. He
- ready becuase rights become much less valuable, indeed vaguest idea wood river question.
- He determin drop litig monastri, relinguish claim wood-cut fisheri rihgt. He readi becuas
- right become much less valuabl, inde vaguest idea wood river question.
We can clearly see the difference here. Now let's perform lemmatization on the same text.
Lemmatization
- from nltk.corpus import stopwords
- from nltk.tokenize import word_tokenize
- from nltk.stem import WordNetLemmatizer
- # nltk.download('wordnet')  # run this once to fetch the WordNet corpus
-
- text = """He determined to drop his litigation with the monastry, and relinguish his claims to the wood-cuting and
- fishery rihgts at once. He was the more ready to do this becuase the rights had become much less valuable, and he had
- indeed the vaguest idea where the wood and river in question were."""
-
- stop_words = set(stopwords.words('english'))
-
- word_tokens = word_tokenize(text)
-
- # Remove stopwords first
- filtered_sentence = []
- for w in word_tokens:
-     if w not in stop_words:
-         filtered_sentence.append(w)
- print(filtered_sentence)
-
- # Lemmatize each token, trying noun, verb and adjective senses in turn
- wordnet_lemmatizer = WordNetLemmatizer()
- lemma_word = []
- for w in filtered_sentence:
-     word1 = wordnet_lemmatizer.lemmatize(w, pos="n")
-     word2 = wordnet_lemmatizer.lemmatize(word1, pos="v")
-     word3 = wordnet_lemmatizer.lemmatize(word2, pos="a")
-     lemma_word.append(word3)
- print(lemma_word)
- He determined drop litigation monastry, relinguish claims wood-cuting fishery rihgts. He
- ready becuase rights become much less valuable, indeed vaguest idea wood river question.
- He determined drop litigation monastry, relinguish claim wood-cuting fishery rihgts. He
- ready becuase right become much le valuable, indeed vaguest idea wood river question.
Here, v stands for verb, a for adjective and n for noun. The lemmatizer only lemmatizes those words which match the pos parameter of the lemmatize method.
Lemmatization is done on the basis of part-of-speech (POS) tagging.
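Rather than blindly chaining the three pos values as above, you can let a POS tagger decide which tag to pass for each word. Here is a sketch using NLTK's pos_tag (the helper name wordnet_pos is my own, and the tagger requires nltk.download('averaged_perceptron_tagger')):
- import nltk
- from nltk.corpus import wordnet
- from nltk.stem import WordNetLemmatizer
-
- wnl = WordNetLemmatizer()
-
- def wordnet_pos(treebank_tag):
-     """Map a Penn Treebank tag to the WordNet POS the lemmatizer expects."""
-     if treebank_tag.startswith("J"):
-         return wordnet.ADJ
-     if treebank_tag.startswith("V"):
-         return wordnet.VERB
-     if treebank_tag.startswith("R"):
-         return wordnet.ADV
-     return wordnet.NOUN
-
- tokens = nltk.word_tokenize("He was driving and went for a drive")
- for word, tag in nltk.pos_tag(tokens):
-     print(word, "->", wnl.lemmatize(word, pos=wordnet_pos(tag)))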
2. Text normalization using spaCy
As we saw earlier, spaCy is an excellent NLP library. It provides many industrial-grade methods for performing lemmatization. Unfortunately, spaCy has no method for stemming. To perform lemmatization, check out the code below:
- # Make sure to download the English model with "python -m spacy download en"
-
- import en_core_web_sm
- nlp = en_core_web_sm.load()
-
- doc = nlp(u"""He determined to drop his litigation with the monastry, and relinguish his claims to the wood-cuting and
- fishery rihgts at once. He was the more ready to do this becuase the rights had become much less valuable, and he had
- indeed the vaguest idea where the wood and river in question were.""")
-
- # Collect the lemma of every token
- lemma_word1 = []
- for token in doc:
-     lemma_word1.append(token.lemma_)
- print(lemma_word1)
- -PRON- determine to drop -PRON- litigation with the monastry, and relinguish -PRON- claim
- to the wood-cuting and \n fishery rihgts at once. -PRON- be the more ready to do this
- becuase the right have become much less valuable, and -PRON- have \n indeed the vague idea
- where the wood and river in question be.
Here, -PRON- is the symbol spaCy uses for pronouns; it can easily be removed using regular expressions. The nice thing about spaCy is that we do not have to pass any pos parameter to perform lemmatization.
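For example, a minimal sketch that strips the -PRON- placeholders with a regular expression, reusing the lemma_word1 list from the code above:
- import re
-
- # Join the lemmas back into a string and drop the -PRON- placeholders
- lemmatized_text = " ".join(lemma_word1)
- cleaned_text = re.sub(r"-PRON-\s*", "", lemmatized_text)
- print(cleaned_text)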
3. Text normalization using TextBlob
TextBlob is a Python library made especially for preprocessing text data. It is based on the NLTK library. We can use TextBlob to perform lemmatization; however, there is no module for stemming in TextBlob.
So let's see how to perform lemmatization using TextBlob in Python:
- # Import the Word class from the textblob library
- from textblob import Word
-
- text = """He determined to drop his litigation with the monastry, and relinguish his claims to the wood-cuting and
- fishery rihgts at once. He was the more ready to do this becuase the rights had become much less valuable, and he had
- indeed the vaguest idea where the wood and river in question were."""
-
- # Lemmatize each word, trying noun, verb and adjective senses in turn
- lem = []
- for i in text.split():
-     word1 = Word(i).lemmatize("n")
-     word2 = Word(word1).lemmatize("v")
-     word3 = Word(word2).lemmatize("a")
-     lem.append(Word(word3).lemmatize())
- print(lem)
- He determine to drop his litigation with the monastry, and relinguish his claim to the
- wood-cuting and fishery rihgts at once. He wa the more ready to do this becuase the right
- have become much le valuable, and he have indeed the vague idea where the wood and river
- in question were.
Just as we saw in the NLTK section, TextBlob also uses POS tagging to perform lemmatization.
Keep in mind that stopwords actually play an important role in problems such as sentiment analysis and question-answering systems. That is why removing stopwords can severely hurt our model's accuracy in such cases.