
[Natural Language] Text Vectorization with Bag-of-Words, TF-IDF, and Word2Vec Models

I. Task Objective

Write Python code that uses the HarryPorter e-book as the corpus and performs text vectorization with the bag-of-words model, the TF-IDF model, and the Word2Vec model. (A minimal toy sketch of bag-of-words and TF-IDF vectorization is given right after the task list.)

1. First preprocess the data. When training Word2Vec, take the five words before and after each word as its context (window size 5), and generate 50-dimensional vectors.

2. Find the 5 words semantically closest to courtroom and to wizard, respectively.

3. Visualize the two words wizard and witch on a two-dimensional plane.
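As referenced above, here is a minimal toy sketch of what "vectorization" means for the first two models, using gensim's Dictionary and TfidfModel on two made-up sentences (the sentences are illustrative only and are not taken from the corpus):

from gensim.corpora import Dictionary
from gensim.models import TfidfModel

# Two made-up sentences, used only to illustrate bag-of-words and TF-IDF
toy_sentences = [['harry', 'saw', 'the', 'wizard'],
                 ['the', 'wizard', 'saw', 'the', 'witch']]
toy_dictionary = Dictionary(toy_sentences)
toy_bow = [toy_dictionary.doc2bow(s) for s in toy_sentences]  # (token_id, count) pairs
toy_tfidf = TfidfModel(toy_bow)                               # reweights counts by IDF
print(toy_bow[1])             # sparse word counts for the second sentence
print(toy_tfidf[toy_bow[1]])  # the same sentence with TF-IDF weights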

II. Code

import nltk
nltk.download('punkt')
nltk.download('stopwords')

 

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from gensim.models import Word2Vec
from gensim.models import TfidfModel
from gensim.corpora import Dictionary
import matplotlib.pyplot as plt

# Load the English stop-word list
stop_words = set(stopwords.words('english'))

# Load the corpus
corpus_file = '/Users/zhengyawen/Downloads/HarryPorter.txt'
with open(corpus_file, 'r', encoding='utf-8') as file:
    data = file.read()

# Preprocess: split on periods, lowercase, and tokenize
sentences = [word_tokenize(sentence.lower()) for sentence in data.split('.')]
preprocessed_sentences = []
for sentence in sentences:
    valid_words = []
    for word in sentence:
        if word.isalpha() and word not in stop_words:
            valid_words.append(word)
    preprocessed_sentences.append(valid_words)

# Build the Word2Vec model (CBOW, 50-dimensional vectors, context window of 5)
w2v_model = Word2Vec(sentences=preprocessed_sentences, vector_size=50, window=5, min_count=1, sg=0)

# Get the word vectors
vector_courtroom = w2v_model.wv['courtroom']
vector_wizard = w2v_model.wv['wizard']

# Find the 5 words most similar to "courtroom" and "wizard"
similar_words_courtroom = w2v_model.wv.most_similar('courtroom', topn=5)
similar_words_wizard = w2v_model.wv.most_similar('wizard', topn=5)

print("Word2Vec model:")
print("Vector for 'courtroom':", vector_courtroom)
print("Vector for 'wizard':", vector_wizard)
print("5 most similar words (courtroom):")
for word, similarity in similar_words_courtroom:
    print(f"{word}: {similarity}")
print("\n5 most similar words (wizard):")
for word, similarity in similar_words_wizard:
    print(f"{word}: {similarity}")

# Build the bag-of-words model and the TF-IDF model
dictionary = Dictionary(preprocessed_sentences)
corpus = [dictionary.doc2bow(sentence) for sentence in preprocessed_sentences]
tfidf_model = TfidfModel(corpus)
corpus_tfidf = tfidf_model[corpus]
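# (Illustrative addition, not part of the original code: a minimal sketch of
# how the bag-of-words and TF-IDF representations built above could be
# inspected for a single sentence. It assumes the first preprocessed sentence
# is non-empty.)
example_bow = corpus[0]                   # list of (token_id, count) pairs
example_tfidf = tfidf_model[example_bow]  # list of (token_id, TF-IDF weight) pairs
print("\nBag-of-words representation of the first sentence:")
print([(dictionary[token_id], count) for token_id, count in example_bow])
print("TF-IDF representation of the first sentence:")
print([(dictionary[token_id], round(weight, 3)) for token_id, weight in example_tfidf])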
# Visualize the Word2Vec vectors of "wizard" and "witch"
words_to_plot = ['wizard', 'witch']
word_vectors = [w2v_model.wv[word] for word in words_to_plot]

# Plot the first two dimensions of each 50-dimensional vector
plt.figure(figsize=(10, 6))
for i, word in enumerate(words_to_plot):
    plt.scatter(word_vectors[i][0], word_vectors[i][1], label=word)
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Visualization of Word Vectors')
plt.legend()
plt.show()
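Note that the plot above uses only the first two of the 50 vector dimensions, so it is not a faithful two-dimensional projection of the embedding space. If a true 2-D view is wanted, one common option is to reduce the vectors with PCA first. The following is a minimal sketch under the assumption that scikit-learn is available; it is not part of the original assignment code.

from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Project the 50-dimensional vectors of "wizard" and "witch" onto 2 dimensions
# with PCA, then plot the projected points. With only two words, two principal
# components is the maximum PCA allows.
words_to_plot = ['wizard', 'witch']
vectors = [w2v_model.wv[word] for word in words_to_plot]

pca = PCA(n_components=2)
points = pca.fit_transform(vectors)

plt.figure(figsize=(8, 5))
for (x, y), word in zip(points, words_to_plot):
    plt.scatter(x, y, label=word)
    plt.annotate(word, (x, y))
plt.xlabel('PCA component 1')
plt.ylabel('PCA component 2')
plt.title('PCA projection of Word2Vec vectors')
plt.legend()
plt.show()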

III. Program Output


 
