
English Text Processing Toolkit 2 — spaCy


Study notes from the NetEase Cloud Classroom AI Engineer (Natural Language Processing) course, continuing from the previous post, "English Text Processing Toolkit 1 — NLTK".

1. Introduction to spaCy


spaCy is an advanced natural language processing library for Python and Cython. It is built on the latest research and was designed from the start for use in real products.

spaCy ships with pretrained statistical models and word vectors, and currently supports tokenization for 34+ languages (Chinese is not yet supported). It advertises one of the fastest syntactic parsers available, convolutional neural network models for tagging, parsing, and named entity recognition, and easy integration with deep learning.

2. spaCy vs. NLTK


(Figure: a comparison table of spaCy and NLTK features, shown as an image in the original post.)

3. Installing spaCy

On Windows with an Anaconda environment, installing via the conda command is the most convenient:

conda config --add channels conda-forge

conda install spacy

python -m spacy download en
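After installation, a quick sanity check (a minimal sketch, assuming spaCy 2.x, where "spacy download en" creates an "en" shortcut link to the small English model):

import spacy
nlp = spacy.load('en')   # load via the shortcut link created above
print(nlp.pipe_names)    # e.g. ['tagger', 'parser', 'ner']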

4. Basic spaCy operations

(1) English tokenization (word segmentation)

import spacy

nlp = spacy.load('en')
doc = nlp('Hello! My name is LittleTree!')

print("Tokens:")
for token in doc:
    print(token.text)

print("\nSentences:")
for sent in doc.sents:
    print(sent)

Output (shown as a screenshot in the original post).

Each token object carries a rich set of attributes; the snippet below prints some of them.

doc = nlp("Next week I'll be in SZ.")

for token in doc:

print("{0}\t{1}\t{2}\t{3}\t{4}\t{5}\t{6}\t{7}".format(

token.text,

token.idx,

token.lemma_,

token.is_punct,

token.is_space,

token.shape_,

token.pos_,

token.tag_

))

Output (screenshot in the original post).

(2) Part-of-speech tagging

doc = nlp("Next week I'll be in Shanghai.")

print([(token.text, token.tag_) for token in doc])

Output:

[('Next', 'JJ'), ('week', 'NN'), ('I', 'PRP'), ("'ll", 'MD'), ('be', 'VB'), ('in', 'IN'), ('Shanghai', 'NNP'), ('.', '.')]
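These are Penn Treebank part-of-speech codes. If a code is unfamiliar, spacy.explain can decode it; a minimal sketch:

import spacy
# print a human-readable description for each tag code
for tag in ['JJ', 'NN', 'PRP', 'MD', 'VB', 'IN', 'NNP']:
    print(tag, '->', spacy.explain(tag))   # e.g. JJ -> adjective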

(3) Named entity recognition

doc = nlp("Next week I'll be in Shanghai.")

for ent in doc.ents:

print(ent.text, ent.label_)

Output:

Next week DATE

Shanghai GPE
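The entity labels are short codes as well, and spacy.explain decodes them too; a minimal sketch:

print(spacy.explain('DATE'))  # absolute or relative dates or periods
print(spacy.explain('GPE'))   # countries, cities, states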

The entities can also be displayed with a very nice visualization:

from spacy import displacy

displacy.render(doc, style='ent', jupyter=True)

Output (the rendered entity visualization is shown in the original post).
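Outside Jupyter, the jupyter=True flag does not apply. A sketch of two alternatives: displacy.render returns the markup as a string (page=True wraps it in a full HTML page), and displacy.serve starts a small local web server:

from spacy import displacy
# write the rendered markup to a file you can open in a browser
html = displacy.render(doc, style='ent', page=True)
with open('ents.html', 'w', encoding='utf-8') as f:
    f.write(html)
# or serve it at http://localhost:5000 (blocks until interrupted)
# displacy.serve(doc, style='ent')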

(4) Chunking (noun-phrase analysis)

spaCy can automatically detect noun phrases and report each phrase's root word, e.g. "Journal", "piece", and "currencies" below.

doc = nlp("Wall Street Journal just published an interesting piece on crypto currencies")

for chunk in doc.noun_chunks:

print(chunk.text, chunk.label_, chunk.root.text)

Output (screenshot in the original post).
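Each chunk's root token also exposes its dependency relation and head, which shows how the phrase attaches to the rest of the sentence; a minimal sketch:

# show how each noun chunk's root connects to the sentence
for chunk in doc.noun_chunks:
    print(chunk.text, '|', chunk.root.dep_, '->', chunk.root.head.text)
# e.g. "Wall Street Journal" should appear as the nsubj of "published"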

(5) Syntactic dependency parsing

doc = nlp('Wall Street Journal just published an interesting piece on crypto currencies')
for token in doc:
    # prints token/tag <--dependency-- head/tag
    print("{0}/{1} <--{2}-- {3}/{4}".format(
        token.text, token.tag_, token.dep_, token.head.text, token.head.tag_))

Output (screenshot in the original post).
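The same parse can be drawn with displacy by switching the style from 'ent' to 'dep'; a sketch for Jupyter:

from spacy import displacy
# draw the dependency tree; compact=True keeps the arcs smaller
displacy.render(doc, style='dep', jupyter=True, options={'compact': True})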

(6) Using word vectors

A very powerful text representation method in NLP is word2vec, which learns a dense vector representation of each word from its contexts. In this representation, semantically related words end up close together in vector space, and relations such as v(grandfather) - v(grandmother) ≈ v(man) - v(woman) hold approximately.

To use English word vectors in spaCy, you first need to download the pretrained model from the terminal:

python3 -m spacy download en_core_web_lg

Now let's use the word vectors to do something fun.

from scipy import spatial

nlp = spacy.load('en_core_web_lg')

# cosine similarity
cosine_similarity = lambda x, y: 1 - spatial.distance.cosine(x, y)

# word vectors for man, woman, king, queen
man = nlp.vocab['man'].vector
woman = nlp.vocab['woman'].vector
queen = nlp.vocab['queen'].vector
king = nlp.vocab['king'].vector

# a simple vector computation: "man" - "woman" + "queen"
maybe_king = man - woman + queen
computed_similarities = []

# scan the vectors of the whole vocabulary and collect similarities
for word in nlp.vocab:
    if not word.has_vector:
        continue
    similarity = cosine_similarity(maybe_king, word.vector)
    computed_similarities.append((word, similarity))

# sort and show the closest matches
computed_similarities = sorted(computed_similarities, key=lambda item: -item[1])
print([w[0].text for w in computed_similarities[:10]])

Output:

['Queen', 'QUEEN', 'queen', 'King', 'KING', 'king', 'KIng', 'Kings', 'KINGS', 'kings']
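The top matches are mostly case variants of the same two words. One option (just one filtering choice, not the only one) is to keep only lowercase alphabetic lexemes; a minimal sketch reusing computed_similarities from above:

# collapse case variants by keeping lowercase alphabetic entries only
filtered = [(w, s) for w, s in computed_similarities if w.is_lower and w.is_alpha]
print([w.text for w, s in filtered[:10]])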

(7) Word and document similarity

On top of word vectors, spaCy provides similarity computation from the word level up to the document level; the example below shows how to use it.

# word-level semantic similarity (relatedness)
banana = nlp.vocab['banana']
dog = nlp.vocab['dog']
fruit = nlp.vocab['fruit']
animal = nlp.vocab['animal']
print(dog.similarity(animal), dog.similarity(fruit))       # 0.6618534 0.23552845
print(banana.similarity(fruit), banana.similarity(animal)) # 0.67148364 0.2427285

# document-level semantic similarity (relatedness)
target = nlp("Cats are beautiful animals.")
doc1 = nlp("Dogs are awesome.")
doc2 = nlp("Some gorgeous creatures are felines.")
doc3 = nlp("Dolphins are swimming mammals.")
print(target.similarity(doc1))  # 0.8901765218466683
print(target.similarity(doc2))  # 0.9115828449161616
print(target.similarity(doc3))  # 0.7822956752876101
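Under the hood, a Doc's vector in these models is the average of its token vectors, and similarity is the cosine between those averages. A minimal sketch to check that assumption:

import numpy as np
# compare the doc vector with the mean of its token vectors
manual_mean = np.mean([token.vector for token in target], axis=0)
print(np.allclose(manual_mean, target.vector))  # expected: True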

This series is still being updated. Feel free to leave a little ❤ to encourage me (✿◡‿◡)
