NLTK Tokenization for Chinese and English: Download, Configuration, and Worked Examples


Recent experience with jieba suggests that, for English sentences, its tokenization is overall weaker than NLTK's. This post therefore records in detail how to install and configure NLTK, and walks through basic examples covering tokenization, POS tagging, NER, parsing, and user-defined tokenization. Because the nltk_data resources are large and slow to download, a copy is also provided here for research use.

1. Install NLTK: pip install nltk

2. Download: grab the nltk_data resources directly from https://download.csdn.net/download/hhue2007/86912857

3. Unzip: after extracting to the current folder, copy nltk_data into any path that NLTK can recognize.
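If you would rather keep nltk_data in a custom location, you can also append that directory to NLTK's search path at runtime. A minimal sketch (the path below is a placeholder for wherever you extracted the data):

import nltk
# the appended directory must itself contain tokenizers/, corpora/, etc.
nltk.data.path.append(r'D:\data\nltk_data')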

4. To list the directories NLTK searches for nltk_data:

import nltk

print(nltk.data.path)         # all directories NLTK searches
# print(nltk.data.find('.'))  # first nltk_data directory actually found

5. Go into the nltk_data\tokenizers directory and unzip punkt into that folder.
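If your network connection allows, steps 2 through 5 can be replaced by NLTK's built-in downloader. A minimal sketch, using the standard NLTK data package ids needed by the example below:

import nltk
nltk.download('punkt')                       # Punkt sentence/word tokenizer models
nltk.download('averaged_perceptron_tagger')  # model behind nltk.pos_tag
nltk.download('maxent_ne_chunker')           # model behind nltk.ne_chunk
nltk.download('words')                       # word list required by the NE chunker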

6. NLTK is now ready to use. A typical example:

#coding:utf8
"""
Description: worked examples of NLTK tokenization
Author: hh, 2022-10-31
Prompt: code in Python3 env
Install: pip install nltk
"""
import nltk
from nltk.tokenize import MWETokenizer
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters
# print(nltk.data.path)

def cut_sentences_en(content):
    punkt_param = PunktParameters()
    # custom abbreviation list; Punkt compares single lowercased tokens with the
    # trailing period stripped, so use 'i.e' rather than 'i.e.', and note that a
    # multi-word entry such as 'North China Sea' can never match a single token
    abbreviation = ['i.e', 'dr', 'vs', 'mr', 'mrs', 'prof', 'inc', '99/22', 'North China Sea']
    punkt_param.abbrev_types = set(abbreviation)
    tokenizer = PunktSentenceTokenizer(punkt_param)
    sentences = tokenizer.tokenize(content)
    return sentences

if __name__ == '__main__':
    # ******************** 1. basic NLTK tokenization ********************
    print('=' * 40)
    text = "Geological Final Well Report Well LH35-13-1 Block 99/22,我来自北京大学"
    token_list = nltk.word_tokenize(text)
    print("\n1. Tokenization: ", "$ ".join(token_list))
    tagged_list = nltk.pos_tag(token_list)
    print("\n2. POS tagging: ", tagged_list)
    # NER builds on the POS-tagging result
    ners = nltk.ne_chunk(tagged_list)
    print("\n3. Named-entity recognition (NER): ", ners)
    entity_list = nltk.chunk.ne_chunk(tagged_list)
    print("\n4. Parse (chunk) tree: ", entity_list)
    text2 = ('Geological Final Well Report Well LH35-13-1 Block 99/22, North China Sea,'
             'LH35-13-1 GEOLOGICAL COMPOSITE LOG,Geological Final Well Report')
    tokenized_string = nltk.word_tokenize(text2)
    # multi-word expressions (phrases) to merge back into single tokens
    mwe = [('Geological', 'Final', 'Well', 'Report', 'Well'), ('99/', '22'),
           ('North', 'China', 'Sea'), ('GEOLOGICAL', 'COMPOSITE', 'LOG')]
    mwe_tokenizer = MWETokenizer(mwe)
    result = mwe_tokenizer.tokenize(tokenized_string)
    print("\n5. User-defined tokenization: " + "/ ".join(result))
    print('\n6. User-defined sentence splitting (abbreviations):', cut_sentences_en(text2))
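Note that MWETokenizer glues the matched words together with an underscore by default ('North_China_Sea'); the separator argument controls the joiner. A minimal sketch:

from nltk.tokenize import MWETokenizer

# keep 'North China Sea' as one token, joined with spaces instead of '_'
tokenizer = MWETokenizer([('North', 'China', 'Sea')], separator=' ')
print(tokenizer.tokenize('the North China Sea block'.split()))
# -> ['the', 'North China Sea', 'block']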

 
