Basic usage of spaCy

spaCy

Official spaCy documentation: Install spaCy · spaCy Usage Documentation

Contents

Introduction

I. Installation

1. Trained models

II. Features

1. Sentence segmentation (sentencizer)

2. Tokenization

3. Part-of-speech tagging

4. Stop words

5. Named entity recognition (NER)

6. Dependency parsing

7. Lemmatization

8. Noun chunks

9. Coreference resolution

III. Visualization


Introduction

spaCy can be used for tokenization, named entity recognition, part-of-speech tagging, and more.

I. Installation

pip install spacy

1. Trained models

After installation you also need to download an official trained pipeline. Each language has its own models; the examples here use the Chinese model:

python -m spacy download zh_core_web_sm

Using it in code:

import spacy
nlp = spacy.load("zh_core_web_sm")

Official documentation for the trained models:

Trained Models & Pipelines · spaCy Models Documentation

Each language also comes in several model variants. For Chinese, besides the zh_core_web_sm model downloaded above, there are also zh_core_web_trf, zh_core_web_md, and others. They differ in accuracy and size: zh_core_web_sm is small but less accurate than zh_core_web_trf, which in turn is much larger. This makes it possible to pick a model that fits your scenario.
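For illustration, a minimal sketch of loading two variants side by side (assuming zh_core_web_trf has also been downloaded with python -m spacy download zh_core_web_trf):

import spacy

# Small model: compact and fast, but lower accuracy
nlp_sm = spacy.load("zh_core_web_sm")
# Transformer model: much larger download, higher accuracy
nlp_trf = spacy.load("zh_core_web_trf")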

Here is a short introduction based on the zh_core_web_sm model.

Trained Models & Pipelines · spaCy Models Documentation

  • LANGUAGE: which language the model is for
  • SIZE: the size of the model
  • COMPONENTS: the components the model ships with

                tok2vec: token-to-vector embedding layer that feeds the other components

                tagger: part-of-speech tagging

                parser: dependency parsing

                senter: sentence segmentation

                ner: named entity recognition

                attribute_ruler: rule-based mapping of token attributes

  • PIPELINE: the pipeline components; the model runs them in pipeline order (see the sketch below)
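You can check which components a loaded pipeline actually contains, and their execution order, directly in code (a minimal sketch; the exact list depends on the model version):

import spacy

nlp = spacy.load("zh_core_web_sm")
# Pipeline components in execution order,
# e.g. ['tok2vec', 'tagger', 'parser', 'attribute_ruler', 'ner']
print(nlp.pipe_names)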

        

The model page also lists which part-of-speech tags, dependency labels, and entity types the model produces:

Here are explanations of some of the tag names:

IP: simple clause
NP: noun phrase
VP: verb phrase
PU: punctuation, usually sentence-delimiting marks such as periods, question marks, and exclamation marks
LCP: localizer (locative) phrase
PP: prepositional phrase
CP: phrase formed with 的 expressing a modifying relation
DNP: phrase formed with 的 expressing possession
ADVP: adverb phrase
ADJP: adjective phrase
DP: determiner phrase
QP: quantifier phrase
NN: common noun
NR: proper noun
NT: temporal noun
PN: pronoun
VV: verb
VC: copula 是
CC: coordinating conjunction
VE: 有 as the main verb
VA: predicative adjective
AS: aspect marker (e.g. 了)
VRD: verb-resultative compound
CD: cardinal number
DT: determiner
EX: existential there
FW: foreign word
IN: preposition or subordinating conjunction
JJ: adjective or ordinal numeral
JJR: adjective, comparative
JJS: adjective, superlative
LS: list item marker
MD: modal auxiliary
PDT: pre-determiner
POS: genitive marker
PRP: personal pronoun
RB: adverb
RBR: adverb, comparative
RBS: adverb, superlative
RP: particle
SYM: symbol
TO: "to" as preposition or infinitive marker
WDT: WH-determiner
WP: WH-pronoun
WP$: WH-pronoun, possessive
WRB: WH-adverb

The official glossary explaining POS tags, dependency labels, and entity types:

def explain(term):
    """Get a description for a given POS tag, dependency label or entity type.

    term (str): The term to explain.
    RETURNS (str): The explanation, or `None` if not found in the glossary.

    EXAMPLE:
        >>> spacy.explain(u'NORP')
        >>> doc = nlp(u'Hello world')
        >>> print([w.text, w.tag_, spacy.explain(w.tag_) for w in doc])
    """
    if term in GLOSSARY:
        return GLOSSARY[term]


GLOSSARY = {
    # POS tags
    # Universal POS Tags
    # http://universaldependencies.org/u/pos/
    "ADJ": "adjective",
    "ADP": "adposition",
    "ADV": "adverb",
    "AUX": "auxiliary",
    "CONJ": "conjunction",
    "CCONJ": "coordinating conjunction",
    "DET": "determiner",
    "INTJ": "interjection",
    "NOUN": "noun",
    "NUM": "numeral",
    "PART": "particle",
    "PRON": "pronoun",
    "PROPN": "proper noun",
    "PUNCT": "punctuation",
    "SCONJ": "subordinating conjunction",
    "SYM": "symbol",
    "VERB": "verb",
    "X": "other",
    "EOL": "end of line",
    "SPACE": "space",
    # POS tags (English)
    # OntoNotes 5 / Penn Treebank
    # https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
    ".": "punctuation mark, sentence closer",
    ",": "punctuation mark, comma",
    "-LRB-": "left round bracket",
    "-RRB-": "right round bracket",
    "``": "opening quotation mark",
    '""': "closing quotation mark",
    "''": "closing quotation mark",
    ":": "punctuation mark, colon or ellipsis",
    "$": "symbol, currency",
    "#": "symbol, number sign",
    "AFX": "affix",
    "CC": "conjunction, coordinating",
    "CD": "cardinal number",
    "DT": "determiner",
    "EX": "existential there",
    "FW": "foreign word",
    "HYPH": "punctuation mark, hyphen",
    "IN": "conjunction, subordinating or preposition",
    "JJ": "adjective (English), other noun-modifier (Chinese)",
    "JJR": "adjective, comparative",
    "JJS": "adjective, superlative",
    "LS": "list item marker",
    "MD": "verb, modal auxiliary",
    "NIL": "missing tag",
    "NN": "noun, singular or mass",
    "NNP": "noun, proper singular",
    "NNPS": "noun, proper plural",
    "NNS": "noun, plural",
    "PDT": "predeterminer",
    "POS": "possessive ending",
    "PRP": "pronoun, personal",
    "PRP$": "pronoun, possessive",
    "RB": "adverb",
    "RBR": "adverb, comparative",
    "RBS": "adverb, superlative",
    "RP": "adverb, particle",
    "TO": 'infinitival "to"',
    "UH": "interjection",
    "VB": "verb, base form",
    "VBD": "verb, past tense",
    "VBG": "verb, gerund or present participle",
    "VBN": "verb, past participle",
    "VBP": "verb, non-3rd person singular present",
    "VBZ": "verb, 3rd person singular present",
    "WDT": "wh-determiner",
    "WP": "wh-pronoun, personal",
    "WP$": "wh-pronoun, possessive",
    "WRB": "wh-adverb",
    "SP": "space (English), sentence-final particle (Chinese)",
    "ADD": "email",
    "NFP": "superfluous punctuation",
    "GW": "additional word in multi-word expression",
    "XX": "unknown",
    "BES": 'auxiliary "be"',
    "HVS": 'forms of "have"',
    "_SP": "whitespace",
    # POS Tags (German)
    # TIGER Treebank
    # http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/TIGERCorpus/annotation/tiger_introduction.pdf
    "$(": "other sentence-internal punctuation mark",
    "$,": "comma",
    "$.": "sentence-final punctuation mark",
    "ADJA": "adjective, attributive",
    "ADJD": "adjective, adverbial or predicative",
    "APPO": "postposition",
    "APPR": "preposition; circumposition left",
    "APPRART": "preposition with article",
    "APZR": "circumposition right",
    "ART": "definite or indefinite article",
    "CARD": "cardinal number",
    "FM": "foreign language material",
    "ITJ": "interjection",
    "KOKOM": "comparative conjunction",
    "KON": "coordinate conjunction",
    "KOUI": 'subordinate conjunction with "zu" and infinitive',
    "KOUS": "subordinate conjunction with sentence",
    "NE": "proper noun",
    "NNE": "proper noun",
    "PAV": "pronominal adverb",
    "PROAV": "pronominal adverb",
    "PDAT": "attributive demonstrative pronoun",
    "PDS": "substituting demonstrative pronoun",
    "PIAT": "attributive indefinite pronoun without determiner",
    "PIDAT": "attributive indefinite pronoun with determiner",
    "PIS": "substituting indefinite pronoun",
    "PPER": "non-reflexive personal pronoun",
    "PPOSAT": "attributive possessive pronoun",
    "PPOSS": "substituting possessive pronoun",
    "PRELAT": "attributive relative pronoun",
    "PRELS": "substituting relative pronoun",
    "PRF": "reflexive personal pronoun",
    "PTKA": "particle with adjective or adverb",
    "PTKANT": "answer particle",
    "PTKNEG": "negative particle",
    "PTKVZ": "separable verbal particle",
    "PTKZU": '"zu" before infinitive',
    "PWAT": "attributive interrogative pronoun",
    "PWAV": "adverbial interrogative or relative pronoun",
    "PWS": "substituting interrogative pronoun",
    "TRUNC": "word remnant",
    "VAFIN": "finite verb, auxiliary",
    "VAIMP": "imperative, auxiliary",
    "VAINF": "infinitive, auxiliary",
    "VAPP": "perfect participle, auxiliary",
    "VMFIN": "finite verb, modal",
    "VMINF": "infinitive, modal",
    "VMPP": "perfect participle, modal",
    "VVFIN": "finite verb, full",
    "VVIMP": "imperative, full",
    "VVINF": "infinitive, full",
    "VVIZU": 'infinitive with "zu", full',
    "VVPP": "perfect participle, full",
    "XY": "non-word containing non-letter",
    # POS Tags (Chinese)
    # OntoNotes / Chinese Penn Treebank
    # https://repository.upenn.edu/cgi/viewcontent.cgi?article=1039&context=ircs_reports
    "AD": "adverb",
    "AS": "aspect marker",
    "BA": "把 in ba-construction",
    # "CD": "cardinal number",
    "CS": "subordinating conjunction",
    "DEC": "的 in a relative clause",
    "DEG": "associative 的",
    "DER": "得 in V-de const. and V-de-R",
    "DEV": "地 before VP",
    "ETC": "for words 等, 等等",
    # "FW": "foreign words"
    "IJ": "interjection",
    # "JJ": "other noun-modifier",
    "LB": "被 in long bei-const",
    "LC": "localizer",
    "M": "measure word",
    "MSP": "other particle",
    # "NN": "common noun",
    "NR": "proper noun",
    "NT": "temporal noun",
    "OD": "ordinal number",
    "ON": "onomatopoeia",
    "P": "preposition excluding 把 and 被",
    "PN": "pronoun",
    "PU": "punctuation",
    "SB": "被 in short bei-const",
    # "SP": "sentence-final particle",
    "VA": "predicative adjective",
    "VC": "是 (copula)",
    "VE": "有 as the main verb",
    "VV": "other verb",
    # Noun chunks
    "NP": "noun phrase",
    "PP": "prepositional phrase",
    "VP": "verb phrase",
    "ADVP": "adverb phrase",
    "ADJP": "adjective phrase",
    "SBAR": "subordinating conjunction",
    "PRT": "particle",
    "PNP": "prepositional noun phrase",
    # Dependency Labels (English)
    # ClearNLP / Universal Dependencies
    # https://github.com/clir/clearnlp-guidelines/blob/master/md/specifications/dependency_labels.md
    "acl": "clausal modifier of noun (adjectival clause)",
    "acomp": "adjectival complement",
    "advcl": "adverbial clause modifier",
    "advmod": "adverbial modifier",
    "agent": "agent",
    "amod": "adjectival modifier",
    "appos": "appositional modifier",
    "attr": "attribute",
    "aux": "auxiliary",
    "auxpass": "auxiliary (passive)",
    "case": "case marking",
    "cc": "coordinating conjunction",
    "ccomp": "clausal complement",
    "clf": "classifier",
    "complm": "complementizer",
    "compound": "compound",
    "conj": "conjunct",
    "cop": "copula",
    "csubj": "clausal subject",
    "csubjpass": "clausal subject (passive)",
    "dative": "dative",
    "dep": "unclassified dependent",
    "det": "determiner",
    "discourse": "discourse element",
    "dislocated": "dislocated elements",
    "dobj": "direct object",
    "expl": "expletive",
    "fixed": "fixed multiword expression",
    "flat": "flat multiword expression",
    "goeswith": "goes with",
    "hmod": "modifier in hyphenation",
    "hyph": "hyphen",
    "infmod": "infinitival modifier",
    "intj": "interjection",
    "iobj": "indirect object",
    "list": "list",
    "mark": "marker",
    "meta": "meta modifier",
    "neg": "negation modifier",
    "nmod": "modifier of nominal",
    "nn": "noun compound modifier",
    "npadvmod": "noun phrase as adverbial modifier",
    "nsubj": "nominal subject",
    "nsubjpass": "nominal subject (passive)",
    "nounmod": "modifier of nominal",
    "npmod": "noun phrase as adverbial modifier",
    "num": "number modifier",
    "number": "number compound modifier",
    "nummod": "numeric modifier",
    "oprd": "object predicate",
    "obj": "object",
    "obl": "oblique nominal",
    "orphan": "orphan",
    "parataxis": "parataxis",
    "partmod": "participal modifier",
    "pcomp": "complement of preposition",
    "pobj": "object of preposition",
    "poss": "possession modifier",
    "possessive": "possessive modifier",
    "preconj": "pre-correlative conjunction",
    "prep": "prepositional modifier",
    "prt": "particle",
    "punct": "punctuation",
    "quantmod": "modifier of quantifier",
    "rcmod": "relative clause modifier",
    "relcl": "relative clause modifier",
    "reparandum": "overridden disfluency",
    "root": "root",
    "vocative": "vocative",
    "xcomp": "open clausal complement",
    # Dependency labels (German)
    # TIGER Treebank
    # http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/TIGERCorpus/annotation/tiger_introduction.pdf
    # currently missing: 'cc' (comparative complement) because of conflict
    # with English labels
    "ac": "adpositional case marker",
    "adc": "adjective component",
    "ag": "genitive attribute",
    "ams": "measure argument of adjective",
    "app": "apposition",
    "avc": "adverbial phrase component",
    "cd": "coordinating conjunction",
    "cj": "conjunct",
    "cm": "comparative conjunction",
    "cp": "complementizer",
    "cvc": "collocational verb construction",
    "da": "dative",
    "dh": "discourse-level head",
    "dm": "discourse marker",
    "ep": "expletive es",
    "hd": "head",
    "ju": "junctor",
    "mnr": "postnominal modifier",
    "mo": "modifier",
    "ng": "negation",
    "nk": "noun kernel element",
    "nmc": "numerical component",
    "oa": "accusative object",
    "oc": "clausal object",
    "og": "genitive object",
    "op": "prepositional object",
    "par": "parenthetical element",
    "pd": "predicate",
    "pg": "phrasal genitive",
    "ph": "placeholder",
    "pm": "morphological particle",
    "pnc": "proper noun component",
    "rc": "relative clause",
    "re": "repeated element",
    "rs": "reported speech",
    "sb": "subject",
    "sbp": "passivized subject (PP)",
    "sp": "subject or predicate",
    "svp": "separable verb prefix",
    "uc": "unit component",
    "vo": "vocative",
    # Named Entity Recognition
    # OntoNotes 5
    # https://catalog.ldc.upenn.edu/docs/LDC2013T19/OntoNotes-Release-5.0.pdf
    "PERSON": "People, including fictional",
    "NORP": "Nationalities or religious or political groups",
    "FACILITY": "Buildings, airports, highways, bridges, etc.",
    "FAC": "Buildings, airports, highways, bridges, etc.",
    "ORG": "Companies, agencies, institutions, etc.",
    "GPE": "Countries, cities, states",
    "LOC": "Non-GPE locations, mountain ranges, bodies of water",
    "PRODUCT": "Objects, vehicles, foods, etc. (not services)",
    "EVENT": "Named hurricanes, battles, wars, sports events, etc.",
    "WORK_OF_ART": "Titles of books, songs, etc.",
    "LAW": "Named documents made into laws.",
    "LANGUAGE": "Any named language",
    "DATE": "Absolute or relative dates or periods",
    "TIME": "Times smaller than a day",
    "PERCENT": 'Percentage, including "%"',
    "MONEY": "Monetary values, including unit",
    "QUANTITY": "Measurements, as of weight or distance",
    "ORDINAL": '"first", "second", etc.',
    "CARDINAL": "Numerals that do not fall under another type",
    # Named Entity Recognition
    # Wikipedia
    # http://www.sciencedirect.com/science/article/pii/S0004370212000276
    # https://pdfs.semanticscholar.org/5744/578cc243d92287f47448870bb426c66cc941.pdf
    "PER": "Named person or family.",
    "MISC": "Miscellaneous entities, e.g. events, nationalities, products or works of art",
    # https://github.com/ltgoslo/norne
    "EVT": "Festivals, cultural events, sports events, weather phenomena, wars, etc.",
    "PROD": "Product, i.e. artificially produced entities including speeches, radio shows, programming languages, contracts, laws and ideas",
    "DRV": "Words (and phrases?) that are dervied from a name, but not a name in themselves, e.g. 'Oslo-mannen' ('the man from Oslo')",
    "GPE_LOC": "Geo-political entity, with a locative sense, e.g. 'John lives in Spain'",
    "GPE_ORG": "Geo-political entity, with an organisation sense, e.g. 'Spain declined to meet with Belgium'",
}

url: https://github.com/explosion/spaCy/blob/master/spacy/glossary.py
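In code you normally query this glossary through spacy.explain() rather than reading the source; a few examples using tags that appear later in this article:

import spacy

print(spacy.explain("NR"))    # proper noun
print(spacy.explain("dobj"))  # direct object
print(spacy.explain("GPE"))   # Countries, cities, states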

II. Features

import spacy
s = "小米董事长叶凡决定投资华为。在2002年,他还创作了<遮天>。"
nlp = spacy.load("zh_core_web_sm")
doc = nlp(s)

1. Sentence segmentation (sentencizer)

# 1. Sentence segmentation (sentencizer)
for i in doc.sents:
    print(i)
"""
小米董事长叶凡决定投资华为。
在2002年,他还创作了<遮天>。
"""

2. Tokenization

# 2. Tokenization
print([w.text for w in doc])
"""
['小米', '董事长', '叶凡', '决定', '投资', '华为', '。', '在', '2002年', ',', '他', '还', '创作', '了', '<遮天>', '。']
"""

3. Part-of-speech tagging

Fine-grained tags (token.tag_):

print([(w.text, w.tag_) for w in doc])
"""
[('小米', 'NR'), ('董事长', 'NN'), ('叶凡', 'NR'), ('决定', 'VV'), ('投资', 'VV'), ('华为', 'NR'), ('。', 'PU'), ('在', 'P'), ('2002年', 'NT'), (',', 'PU'), ('他', 'PN'), ('还', 'AD'), ('创作', 'VV'), ('了', 'AS'), ('<遮天>', 'NN'), ('。', 'PU')]
"""

Coarse-grained tags (token.pos_):

print([(w.text, w.pos_) for w in doc])
"""
[('小米', 'PROPN'), ('董事长', 'NOUN'), ('叶凡', 'PROPN'), ('决定', 'VERB'), ('投资', 'VERB'), ('华为', 'PROPN'), ('。', 'PUNCT'), ('在', 'ADP'), ('2002年', 'NOUN'), (',', 'PUNCT'), ('他', 'PRON'), ('还', 'ADV'), ('创作', 'VERB'), ('了', 'PART'), ('<遮天>', 'NOUN'), ('。', 'PUNCT')]
"""

4. Stop words

print([(w.text, w.is_stop) for w in doc])
"""
[('小米', False), ('董事长', False), ('叶凡', False), ('决定', True), ('投资', False), ('华为', False), ('。', True), ('在', True), ('2002年', False), (',', True), ('他', True), ('还', True), ('创作', False), ('了', True), ('<遮天>', False), ('。', True)]
"""

5. Named entity recognition (NER)

# Named entity recognition (NER)
print([(e.text, e.label_) for e in doc.ents])
"""
[('小米', 'PERSON'), ('叶凡', 'PERSON'), ('2002年', 'DATE')]
"""

6. Dependency parsing

print([(w.text, w.dep_) for w in doc])
"""
[('小米', 'nmod:assmod'), ('董事长', 'appos'), ('叶凡', 'nsubj'), ('决定', 'ROOT'), ('投资', 'ccomp'), ('华为', 'dobj'), ('。', 'punct'), ('在', 'case'), ('2002年', 'nmod:prep'), (',', 'punct'), ('他', 'nsubj'), ('还', 'advmod'), ('创作', 'ROOT'), ('了', 'aux:asp'), ('<遮天>', 'dobj'), ('。', 'punct')]
"""

7. Lemmatization

The Chinese model does not provide this feature, so the English model is used for the demo.

Lemmatization finds the base form of a word: am, is, are, have been are reduced to be, plurals to the singular (cats -> cat), and past tenses to the present tense (had -> have).

import spacy
nlp = spacy.load('en_core_web_sm')
txt = "A magnetic monopole is a hypothetical elementary particle."
doc = nlp(txt)
lem = [token.lemma_ for token in doc]
print(lem)
"""
['a', 'magnetic', 'monopole', 'be', 'a', 'hypothetical', 'elementary', 'particle', '.']
"""

8. Noun chunks

The Chinese model does not provide this feature either, so the English model is used for the demo.

noun_chunks = [nc for nc in doc.noun_chunks]
print(noun_chunks)
"""
[A magnetic monopole, a hypothetical elementary particle]
"""

9. Coreference resolution

Coreference resolution finds the entity that a pronoun such as he/she/it refers to. To use this feature you need a neural-network-based coreference model; if it is not installed yet, run: pip install neuralcoref (note that neuralcoref was built against spaCy 2.x and may not install or run with spaCy 3.x pipelines).

The Chinese model does not provide this feature, so the English model is used for the demo.

txt = "My sister has a son and she loves him."
# Add the pretrained neural coreference resolver to the spaCy pipeline
import neuralcoref
neuralcoref.add_to_pipe(nlp)
doc = nlp(txt)
doc._.coref_clusters
"""
[My sister: [My sister, she], a son: [a son, him]]
"""

III. Visualization

from spacy import displacy
# Visualize the dependency parse
html_str = displacy.render(doc, style="dep")
# Visualize named entities
# html_str = displacy.render(doc, style="ent")
with open("D:\\data\\ss.html", "w", encoding="utf8") as f:
    f.write(html_str)

html_str is an HTML string; it is saved to the local file ss.html, which can then be opened in a browser:

[Figure: dependency parse visualization]

[Figure: named entity visualization]
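Instead of writing the HTML to a file, displacy can also serve the visualization from a small local web server (by default at http://localhost:5000); a minimal sketch:

from spacy import displacy

# Starts a simple web server and blocks until interrupted (Ctrl+C)
displacy.serve(doc, style="dep")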

 

spaCy also has an official visualization library, spacy-streamlit, dedicated to visualizing spaCy NLP results.

streamlit itself is a library dedicated to building visualizations and data apps.

spacy-streamlit has a usage demo:


https://share.streamlit.io/ines/spacy-streamlit-demo/app.py

The demo's GitHub repository:

GitHub - ines/spacy-streamlit-demo
