
Python AI Case Study (1): Identifying Tang Poetry Authors with jieba Word Segmentation and NLTK

The script below uses jieba to segment collections of Li Bai's and Du Fu's poems, trains an NLTK Naive Bayes classifier on the segmented words, and then guesses which of the two poets a user-supplied line of poetry most resembles.
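For readers new to jieba, a minimal sketch of what its segmentation produces (the sample line is the opening of Li Bai's 《静夜思》; the exact split depends on jieba's dictionary and version):

    import jieba

    # jieba.cut returns a generator of segmented tokens.
    print(" ".join(jieba.cut("床前明月光，疑是地上霜。")))
    # Possible output: 床前 明月光 ， 疑 是 地上 霜 。

The full script follows.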
    import jieba
    from nltk.classify import NaiveBayesClassifier

    # Collect Li Bai's poems in advance and save them as libai.txt.
    text1 = open("libai.txt", encoding="utf-8").read()
    list1 = jieba.cut(text1)
    result1 = " ".join(list1)

    # Collect Du Fu's poems in advance and save them as dufu.txt.
    text2 = open("dufu.txt", encoding="utf-8").read()
    list2 = jieba.cut(text2)
    result2 = " ".join(list2)

    # Data preparation: one training example per segmented word,
    # so the jieba segmentation actually drives the model.
    libai = result1.split(" ")
    dufu = result2.split(" ")

    # Feature extraction: each character of a word becomes a boolean feature.
    def word_feats(words):
        return dict([(word, True) for word in words])

    libai_features = [(word_feats(lb), 'lb') for lb in libai]
    dufu_features = [(word_feats(df), 'df') for df in dufu]
    train_set = libai_features + dufu_features

    # Train the classifier.
    classifier = NaiveBayesClassifier.train(train_set)

    # Test: segment a user-supplied line and classify it word by word.
    sentence = input("Enter a line of poetry you like: ")
    print("\n")
    seg_list = jieba.cut(sentence)
    result1 = " ".join(seg_list)
    words = result1.split(" ")
    # The original listing was truncated here; a straightforward completion
    # classifies each word and reports the share of votes for each poet.
    lb = df = 0
    for word in words:
        if classifier.classify(word_feats(word)) == 'lb':
            lb += 1
        else:
            df += 1
    print("Li Bai:", lb / len(words))
    print("Du Fu:", df / len(words))
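The script above trains on all of the data and never measures how well it actually separates the two poets. A minimal evaluation sketch, assuming train_set has been built as above (nltk.classify.util.accuracy and show_most_informative_features are standard NLTK utilities):

    import random
    from nltk.classify import NaiveBayesClassifier
    from nltk.classify.util import accuracy

    # Hold out 20% of the (features, label) pairs built above.
    random.shuffle(train_set)
    cut = int(len(train_set) * 0.8)
    clf = NaiveBayesClassifier.train(train_set[:cut])
    print("held-out accuracy:", accuracy(clf, train_set[cut:]))

    # Inspect which character features the model leans on most.
    clf.show_most_informative_features(10)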