
Practice: Text Classification with a Bidirectional LSTM Model


                

Contents

1 Data Processing

        1.1 Loading the Data

        1.2 Building the Dataset Class

        1.3 Wrapping a DataLoader

2 Model Construction

3 Model Training

4 Model Evaluation

5 Model Prediction

6 Extension Experiments

        6.1 Using PyTorch's built-in unidirectional LSTM for text classification

        6.2 Using a self-implemented unidirectional LSTM for text classification

        Summary


Movie reviews can carry rich sentiment, such as liking or disliking a film. Sentiment analysis is a text classification problem: given a piece of text, decide whether the sentiment it expresses is positive or negative.

This practice uses the IMDB movie review dataset and a bidirectional LSTM to perform sentiment analysis on movie reviews.

1 Data Processing

The IMDB movie review dataset is a classic binary classification dataset of movie reviews. IMDB selects positive and negative reviews by rating: a review with a rating $\ge 7$ is labeled positive, and a review with a rating $\le 4$ is labeled negative. The dataset contains a training set and a test set of 25,000 reviews each; every sample is a real user review of a movie together with the reviewer's sentiment toward it. Its directory structure is shown below:
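A sketch of the layout assumed here (inferred from the loading code below, since the original screenshot is not reproduced; the exact tree may differ):

```
dataset/
├── train.txt    # 25,000 training reviews, one "label<TAB>text" pair per line
├── dev.txt      # 12,500 validation reviews
├── test.txt     # 12,500 test reviews
└── vocab.txt    # vocabulary, one word per line
```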

An LSTM model cannot consume raw text directly; each word must first be converted to a vector representation, called a word embedding. To make this conversion efficient, every word is usually mapped to a numeric ID in advance and the vectors are then obtained by an embedding lookup. We therefore need a vocabulary that maps each word in the text to its index, plus a special token [UNK] for unknown words: any word not found in the vocabulary is treated as [UNK].
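As a minimal illustration of this lookup (toy vocabulary and IDs; the real vocabulary is loaded from ./dataset/vocab.txt later):

```python
word2id_dict = {"[PAD]": 0, "[UNK]": 1, "the": 2, "movie": 3}   # toy vocabulary
tokens = "the movie was great".split(" ")
ids = [word2id_dict.get(w, word2id_dict["[UNK]"]) for w in tokens]
print(ids)  # [2, 3, 1, 1]; "was" and "great" fall back to [UNK]
```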

First, here is my project layout.

In lstm.py, import the required modules (lstm.py is the main file of this experiment; extend_1.py and extend2.py cover the extension experiments from the lab manual). Having suffered through a half-hour LSTM run last time, I also switched the code to run on the GPU this time.
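A sketch of the project layout implied by the imports and paths used below (assumed, since the original screenshot is not reproduced here):

```
project/
├── lstm.py          # main experiment (bidirectional LSTM)
├── extend_1.py      # extension 6.1
├── extend2.py       # extension 6.2
├── nndl.py          # RunnerV3, Accuracy, plotting helpers
├── utils/
│   └── data.py      # load_vocab
├── dataset/         # train.txt, dev.txt, test.txt, vocab.txt
├── checkpoints/     # saved model weights
└── images/          # saved figures
```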

import os
import torch
import torch.nn as nn
from torch.utils.data import Dataset
from utils.data import load_vocab
from functools import partial
import time
import random
import numpy as np
from nndl import Accuracy, RunnerV3

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

1.1 Loading the Data

The original training and test sets each contain 25,000 samples. In this section the original test set is split evenly into a validation set and a test set, stored under ./dataset. The following code loads the data into memory:

def load_imdb_data(path):
    assert os.path.exists(path)
    trainset, devset, testset = [], [], []
    with open(os.path.join(path, "train.txt"), "r", encoding='utf-8') as fr:
        for line in fr:
            sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
            trainset.append((sentence, sentence_label))
    with open(os.path.join(path, "dev.txt"), "r", encoding='utf-8') as fr:
        for line in fr:
            sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
            devset.append((sentence, sentence_label))
    with open(os.path.join(path, "test.txt"), "r", encoding='utf-8') as fr:
        for line in fr:
            sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
            testset.append((sentence, sentence_label))
    return trainset, devset, testset

# Load the IMDB dataset
train_data, dev_data, test_data = load_imdb_data("./dataset/")
# Print one loaded sample
print(train_data[4])

The output is as follows:

As the output shows, each loaded sample consists of two parts: the text string and its label.

1.2 Building the Dataset Class

First, we build an IMDBDataset class for data management; it inherits from torch.utils.data.Dataset.

Since the input is a text sequence, each word must first be converted to its vocabulary ID; those IDs are then used to look up the corresponding word vectors, which are fed into the model for the later computation. The words_to_id method of IMDBDataset implements this step: it uses the vocabulary word2id_dict to map each word in the sequence to its numeric ID so that it can later be turned into a word vector. Words not present in the vocabulary are replaced by [UNK] by default. Internally, words_to_id simply uses the vocabulary as a hash table (a Python dict) for the lookup.

The code is as follows:

class IMDBDataset(Dataset):
    def __init__(self, examples, word2id_dict):
        super(IMDBDataset, self).__init__()
        # Vocabulary used to map words to integer IDs
        self.word2id_dict = word2id_dict
        # Converted dataset
        self.examples = self.words_to_id(examples)

    def words_to_id(self, examples):
        tmp_examples = []
        for idx, example in enumerate(examples):
            seq, label = example
            # Map each word to its vocabulary ID; words not in the vocabulary get the [UNK] ID
            seq = [self.word2id_dict.get(word, self.word2id_dict['[UNK]']) for word in seq.split(" ")]
            label = int(label)
            tmp_examples.append([seq, label])
        return tmp_examples

    def __getitem__(self, idx):
        seq, label = self.examples[idx]
        return seq, label

    def __len__(self):
        return len(self.examples)

# Load the vocabulary
word2id_dict = load_vocab("./dataset/vocab.txt")
# Instantiate the Datasets
train_set = IMDBDataset(train_data, word2id_dict)
dev_set = IMDBDataset(dev_data, word2id_dict)
test_set = IMDBDataset(test_data, word2id_dict)
print('Number of training samples:', len(train_set))
print('Sample example:', train_set[4])

The load_vocab function lives in data.py under the utils folder:

import os

def load_vocab(path):
    assert os.path.exists(path)
    words = []
    with open(path, "r", encoding="utf-8") as f:
        words = f.readlines()
    words = [word.strip() for word in words if word.strip()]
    word2id = dict(zip(words, range(len(words))))
    return word2id

The output is as follows:

 

1.3 Wrapping a DataLoader

After building the Dataset class, we construct the corresponding DataLoader to iterate over batches. Unlike the DataLoaders of earlier chapters, this one needs two extra features:

1. Length limiting: keep the sequence length within a range, so that a few overly long samples do not drag down training.
2. Length padding: a neural network generally requires all sequences in one batch to have the same length, yet sequences of different lengths end up in the same batch, so the shorter ones have to be padded.

For length limiting, we truncate overly long texts with the max_seq_len parameter.

For length padding, we first find the maximum sequence length in the batch and pad the shorter sequences with a meaningless placeholder token [PAD] up to that length, so that all sequences in the batch have the same shape. For example, given the two sentences:

• Sentence 1: This movie was craptacular.
• Sentence 2: I got stuck in traffic on the way to the theater.

After padding, the two sentences become:

• Sentence 1: This movie was craptacular [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]
• Sentence 2: I got stuck in traffic on the way to the theater

Concretely, this section defines a collate_fn function that performs the truncation and padding. It is passed to the DataLoader as a callback; before returning a batch, the DataLoader calls it to process the data and return the processed sequences together with their labels.

In addition, once short sequences are padded with [PAD], the [PAD] positions should not contribute to the classification task, so a variable seq_lens records the true (non-[PAD]) length of each sequence. seq_lens is computed and returned inside collate_fn while it processes the batch. Note that RunnerV3 expects each batch to consist of (inputs, labels), so the sequences and their lengths are packed into one tuple and returned as the inputs, which makes them easy for RunnerV3 to unpack.

def collate_fn(batch_data, pad_val=0, max_seq_len=256):
    seqs, seq_lens, labels = [], [], []
    max_len = 0
    for example in batch_data:
        seq, label = example
        # Truncate the sequence to max_seq_len
        seq = seq[:max_seq_len]
        # Store the truncated sequence
        seqs.append(seq)
        seq_lens.append(len(seq))
        labels.append(label)
        # Track the maximum length in this batch
        max_len = max(max_len, len(seq))
    # Pad every sequence up to the maximum length
    for i in range(len(seqs)):
        seqs[i] = seqs[i] + [pad_val] * (max_len - len(seqs[i]))
    # return (torch.tensor(seqs), torch.tensor(seq_lens)), torch.tensor(labels)
    return (torch.tensor(seqs).to(device), torch.tensor(seq_lens)), torch.tensor(labels).to(device)

The returned seqs and labels are later used as training inputs; since training runs on the GPU here, both are moved with .to(device) before being returned.

Below we test collate_fn with a hand-made batch: set max_seq_len to 5 and pass in two sequences of lengths 6 and 3.

max_seq_len = 5
batch_data = [[[1, 2, 3, 4, 5, 6], 1], [[2, 4, 6], 0]]
(seqs, seq_lens), labels = collate_fn(batch_data, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
print("seqs: ", seqs)
print("seq_lens: ", seq_lens)
print("labels: ", labels)

The output is as follows:

As expected, the sequence of length 6 is truncated to 5, the sequence of length 3 is padded to 5, and the true non-[PAD] lengths are returned as well.

Next, collate_fn is passed to the DataLoader as its collate callback, so that each batch is processed by it before being returned. Note that partial is used to bind the keyword arguments of collate_fn and produce a new callable, which is then used as the collate_fn.

When iterating over batches with the DataLoader, the last batch may contain fewer samples than batch_size; the drop_last parameter controls whether that last batch is dropped.

max_seq_len = 256
batch_size = 128
collate_fn = partial(collate_fn, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
                                           shuffle=True, drop_last=False, collate_fn=collate_fn)
dev_loader = torch.utils.data.DataLoader(dev_set, batch_size=batch_size,
                                         shuffle=False, drop_last=False, collate_fn=collate_fn)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size,
                                          shuffle=False, drop_last=False, collate_fn=collate_fn)

2 Model Construction

The model used in this practice is composed of the following parts:

(1) Embedding layer: vectorizes the input ID sequence, mapping each ID to a vector. We use the PyTorch API torch.nn.Embedding directly.

> class torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)

This API has two important parameters: num_embeddings is the number of embeddings (the vocabulary size) and embedding_dim is the dimensionality of each embedding vector.

torch.nn.Embedding automatically builds a two-dimensional embedding matrix of shape [num_embeddings, embedding_dim]. The padding_idx parameter is the vocabulary ID of the padding token [PAD]; the embedding at this index, and its gradient, are kept at zero during training. For simplicity, [PAD] is usually placed first in the vocabulary, so its ID is 0.
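A minimal sketch of that behavior (toy shapes, not part of the model code):

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4, padding_idx=0)
print(emb.weight[0])          # the [PAD] row is initialized to zeros
out = emb(torch.tensor([[0, 3, 5]]))
out.sum().backward()
print(emb.weight.grad[0])     # the gradient at padding_idx stays zero
```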

(2) Bidirectional LSTM layer: consumes the vector sequence and updates the recurrent units in both the forward and backward directions. We use the PyTorch API torch.nn.LSTM directly; setting bidirectional=True when constructing the LSTM enables the bidirectional mode.

> Question: when implementing a bidirectional LSTM, sequences have to be padded; in the backward pass over the sequence, does the placeholder [PAD] affect the gradient updates of the LSTM parameters? If so, how can the effect be removed?

Since the placeholders carry no useful information but still take part in both the forward and backward computation, they can distort the gradient estimates and updates. In long sequences that consist mostly of padding, the distortion can be severe.

To remove the effect of the placeholders on the LSTM parameter gradients, a padding mask can mark the placeholder positions and screen them out when computing gradients. The padding mask is a binary tensor with the same shape as the input sequence: 0 at placeholder positions and 1 everywhere else.

When computing the loss, multiplying it elementwise with the padding mask zeroes out the contribution of the placeholder positions, so only the real positions take part in the gradient computation and parameter updates.
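A minimal sketch of this idea for a per-position loss (hypothetical shapes; the model in this article instead avoids the problem via pack_padded_sequence and masked average pooling):

```python
import torch

seq_lens = torch.tensor([3, 5])                       # true lengths of two sequences
max_len = 5
# mask[i, t] = 1 for real tokens, 0 for [PAD] positions
mask = (torch.arange(max_len).unsqueeze(0) < seq_lens.unsqueeze(1)).float()
per_token_loss = torch.rand(2, max_len)               # stand-in for a per-position loss
loss = (per_token_loss * mask).sum() / mask.sum()     # padding positions contribute nothing
```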

Note: torch.nn.LSTM does not take sequence lengths directly; instead, the true lengths of the batch can be supplied via nn.utils.rnn.pack_padded_sequence, so the LSTM only processes the real time steps, and after unpacking with pad_packed_sequence the [PAD] positions hold zero vectors.

(3) Pooling layer: averages the hidden states of the bidirectional LSTM over all positions to obtain a representation of the whole sentence.

(4) Output layer: produces the classification logits; torch.nn.Linear is used directly.

Average pooling operator

The pooling operator averages the hidden states of the bidirectional LSTM over all positions to represent the whole sentence. We implement an AveragePooling operator for this: it first builds a mask matrix from the sequence-length vector to mask out the [PAD] positions, then sums the remaining vectors and divides by the true length. The code is as follows:


class AveragePooling(nn.Module):
    def __init__(self):
        super(AveragePooling, self).__init__()

    def forward(self, sequence_output, sequence_length):
        # sequence_length is a tensor of true (non-[PAD]) lengths
        sequence_length = sequence_length.unsqueeze(-1).to(torch.float32)
        # Build a mask from sequence_length to mask out the padding positions
        max_len = sequence_output.shape[1]
        mask = torch.arange(max_len, device='cuda') < sequence_length.to('cuda')
        mask = mask.to(torch.float32).unsqueeze(-1)
        # Mask out the padded part of the sequence
        sequence_output = torch.multiply(sequence_output, mask.to('cuda'))
        # Average the vectors over the true length
        batch_mean_hidden = torch.divide(torch.sum(sequence_output, dim=1), sequence_length.to('cuda'))
        return batch_mean_hidden

Putting the model together

Combining the operators above into the final classification model, the code is as follows:

class Model_BiLSTM_FC(nn.Module):
    def __init__(self, num_embeddings, input_size, hidden_size, num_classes=2):
        super(Model_BiLSTM_FC, self).__init__()
        # Vocabulary size
        self.num_embeddings = num_embeddings
        # Word-vector dimensionality
        self.input_size = input_size
        # Number of LSTM hidden units
        self.hidden_size = hidden_size
        # Number of sentiment classes
        self.num_classes = num_classes
        # Embedding layer
        self.embedding_layer = nn.Embedding(num_embeddings, input_size, padding_idx=0)
        # LSTM layer
        self.lstm_layer = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
        # Pooling layer
        self.average_layer = AveragePooling()
        # Output layer
        self.output_layer = nn.Linear(hidden_size * 2, num_classes)

    def forward(self, inputs):
        # Split the model input into ID sequences and their true lengths
        input_ids, sequence_length = inputs
        # Look up the word vectors
        inputs_emb = self.embedding_layer(input_ids)
        packed_input = nn.utils.rnn.pack_padded_sequence(inputs_emb, sequence_length.cpu(), batch_first=True,
                                                         enforce_sorted=False)
        # Run the LSTM over the packed sequence
        packed_output, _ = self.lstm_layer(packed_input)
        # Unpack the output back to a padded tensor
        sequence_output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True)
        # Pool sequence_output with the average-pooling layer
        batch_mean_hidden = self.average_layer(sequence_output, sequence_length)
        # Classification logits
        logits = self.output_layer(batch_mean_hidden)
        return logits

Unlike the code in the lab manual, PyTorch's nn.LSTM has no per-sample sequence-length argument the way paddle.nn.LSTM does (the Paddle LSTM API is linked below), so the variable-length sequences have to be packed by hand. To achieve the same effect in PyTorch, we use torch.nn.utils.rnn.pack_padded_sequence to pack the padded batch before the LSTM and pad_packed_sequence to unpack its output afterwards (other approaches exist; this one seems simplest to me).

LSTM API documentation - PaddlePaddle deep learning platform
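A minimal sketch of the packing/unpacking pattern used in the model's forward pass (toy shapes; enforce_sorted=False lets the batch stay unsorted):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True, bidirectional=True)
x = torch.randn(2, 5, 8)                      # padded batch: 2 sequences, max length 5
lengths = torch.tensor([5, 3])                # true lengths (must live on the CPU)
packed = nn.utils.rnn.pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False)
packed_out, _ = lstm(packed)
out, _ = nn.utils.rnn.pad_packed_sequence(packed_out, batch_first=True)
print(out.shape)                              # torch.Size([2, 5, 32]); padded steps are zeros
```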

3 Model Training

This section trains the model with RunnerV3. We first set the training hyperparameters, then define the model, optimizer, loss function, and evaluation metric. The loss function is torch.nn.CrossEntropyLoss, which applies softmax internally, so the logits produced by the model's output layer do not need to be normalized with softmax beforehand. Once the Runner's components are defined, training can begin. The code is as follows.
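A minimal sketch of that point (toy values; CrossEntropyLoss combines log-softmax and negative log-likelihood, so it expects raw logits):

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, -1.0], [0.5, 1.5]])   # raw outputs of the linear layer
labels = torch.tensor([0, 1])
print(loss_fn(logits, labels))                     # no explicit softmax needed
```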

nndl.py needs to provide the following for this section:

  1. import torch
  2. import matplotlib.pyplot as plt
  3. class RunnerV3(object):
  4. def __init__(self, model, optimizer, loss_fn, metric, **kwargs):
  5. self.model = model
  6. self.optimizer = optimizer
  7. self.loss_fn = loss_fn
  8. self.metric = metric # 只用于计算评价指标
  9. # 记录训练过程中的评价指标变化情况
  10. self.dev_scores = []
  11. # 记录训练过程中的损失函数变化情况
  12. self.train_epoch_losses = [] # 一个epoch记录一次loss
  13. self.train_step_losses = [] # 一个step记录一次loss
  14. self.dev_losses = []
  15. # 记录全局最优指标
  16. self.best_score = 0
  17. def train(self, train_loader, dev_loader=None, **kwargs):
  18. # 将模型切换为训练模式
  19. self.model.train()
  20. # 传入训练轮数,如果没有传入值则默认为0
  21. num_epochs = kwargs.get("num_epochs", 0)
  22. # 传入log打印频率,如果没有传入值则默认为100
  23. log_steps = kwargs.get("log_steps", 100)
  24. # 评价频率
  25. eval_steps = kwargs.get("eval_steps", 0)
  26. # 传入模型保存路径,如果没有传入值则默认为"best_model.pdparams"
  27. save_path = kwargs.get("save_path", "best_model.pdparams")
  28. custom_print_log = kwargs.get("custom_print_log", None)
  29. # 训练总的步数
  30. num_training_steps = num_epochs * len(train_loader)
  31. if eval_steps:
  32. if self.metric is None:
  33. raise RuntimeError('Error: Metric can not be None!')
  34. if dev_loader is None:
  35. raise RuntimeError('Error: dev_loader can not be None!')
  36. # 运行的step数目
  37. global_step = 0
  38. total_acces = []
  39. total_losses = []
  40. Iters = []
  41. # 进行num_epochs轮训练
  42. for epoch in range(num_epochs):
  43. # 用于统计训练集的损失
  44. total_loss = 0
  45. for step, data in enumerate(train_loader):
  46. X, y = data
  47. # 获取模型预测
  48. # 计算logits
  49. logits = self.model(X)
  50. # 将y转换为和logits相同的形状
  51. acc_y = y.view(-1, 1)
  52. # 计算准确率
  53. probs = torch.softmax(logits, dim=1)
  54. pred = torch.argmax(probs, dim=1)
  55. correct = (pred == acc_y).sum().item()
  56. total = acc_y.size(0)
  57. acc = correct / total
  58. total_acces.append(acc)
  59. # print(acc.numpy()[0])
  60. loss = self.loss_fn(logits, y) # 默认求mean
  61. total_loss += loss
  62. total_losses.append(loss.item())
  63. Iters.append(global_step)
  64. # 训练过程中,每个step的loss进行保存
  65. self.train_step_losses.append((global_step, loss.item()))
  66. if log_steps and global_step % log_steps == 0:
  67. print(
  68. f"[Train] epoch: {epoch}/{num_epochs}, step: {global_step}/{num_training_steps}, loss: {loss.item():.5f}")
  69. # 梯度反向传播,计算每个参数的梯度值
  70. loss.backward()
  71. if custom_print_log:
  72. custom_print_log(self)
  73. # 小批量梯度下降进行参数更新
  74. self.optimizer.step()
  75. # 梯度归零
  76. self.optimizer.zero_grad()
  77. # 判断是否需要评价
  78. if eval_steps > 0 and global_step != 0 and \
  79. (global_step % eval_steps == 0 or global_step == (num_training_steps - 1)):
  80. dev_score, dev_loss = self.evaluate(dev_loader, global_step=global_step)
  81. print(f"[Evaluate] dev score: {dev_score:.5f}, dev loss: {dev_loss:.5f}")
  82. # 将模型切换为训练模式
  83. self.model.train()
  84. # 如果当前指标为最优指标,保存该模型
  85. if dev_score > self.best_score:
  86. self.save_model(save_path)
  87. print(
  88. f"[Evaluate] best accuracy performence has been updated: {self.best_score:.5f} --> {dev_score:.5f}")
  89. self.best_score = dev_score
  90. global_step += 1
  91. # 当前epoch 训练loss累计值
  92. trn_loss = (total_loss / len(train_loader)).item()
  93. # epoch粒度的训练loss保存
  94. self.train_epoch_losses.append(trn_loss)
  95. draw_process("trainning acc", "green", Iters, total_acces, "trainning acc")
  96. print("total_acc:")
  97. print(total_acces)
  98. print("total_loss:")
  99. print(total_losses)
  100. print("[Train] Training done!")
  101. # 模型评估阶段,使用'paddle.no_grad()'控制不计算和存储梯度
  102. @torch.no_grad()
  103. def evaluate(self, dev_loader, **kwargs):
  104. assert self.metric is not None
  105. # 将模型设置为评估模式
  106. self.model.eval()
  107. global_step = kwargs.get("global_step", -1)
  108. # 用于统计训练集的损失
  109. total_loss = 0
  110. # 重置评价
  111. self.metric.reset()
  112. # 遍历验证集每个批次
  113. for batch_id, data in enumerate(dev_loader):
  114. X, y = data
  115. # 计算模型输出
  116. logits = self.model(X)
  117. # 计算损失函数
  118. loss = self.loss_fn(logits, y).item()
  119. # 累积损失
  120. total_loss += loss
  121. # 累积评价
  122. self.metric.update(logits, y)
  123. dev_loss = (total_loss / len(dev_loader))
  124. self.dev_losses.append((global_step, dev_loss))
  125. dev_score = self.metric.accumulate()
  126. self.dev_scores.append(dev_score)
  127. return dev_score, dev_loss
  128. # 模型评估阶段,使用'paddle.no_grad()'控制不计算和存储梯度
  129. @torch.no_grad()
  130. def predict(self, x, **kwargs):
  131. # 将模型设置为评估模式
  132. self.model.eval()
  133. # 运行模型前向计算,得到预测值
  134. logits = self.model(x)
  135. return logits
  136. def save_model(self, save_path):
  137. torch.save(self.model.state_dict(), save_path)
  138. def load_model(self, model_path):
  139. model_state_dict = torch.load(model_path)
  140. self.model.load_state_dict(model_state_dict)
  141. class Accuracy():
  142. def __init__(self, is_logist=True):
  143. # 用于统计正确的样本个数
  144. self.num_correct = 0
  145. # 用于统计样本的总数
  146. self.num_count = 0
  147. self.is_logist = is_logist
  148. def update(self, outputs, labels):
  149. # 判断是二分类任务还是多分类任务,shape[1]=1时为二分类任务,shape[1]>1时为多分类任务
  150. if outputs.shape[1] == 1: # 二分类
  151. outputs = torch.squeeze(outputs, dim=-1)
  152. if self.is_logist:
  153. # logist判断是否大于0
  154. preds = torch.tensor((outputs >= 0), dtype=torch.float32)
  155. else:
  156. # 如果不是logist,判断每个概率值是否大于0.5,当大于0.5时,类别为1,否则类别为0
  157. preds = torch.tensor((outputs >= 0.5), dtype=torch.float32)
  158. else:
  159. # 多分类时,使用'torch.argmax'计算最大元素索引作为类别
  160. preds = torch.argmax(outputs, dim=1)
  161. # 获取本批数据中预测正确的样本个数
  162. labels = torch.squeeze(labels, dim=-1)
  163. batch_correct = torch.sum(torch.tensor(preds == labels, dtype=torch.float32)).cpu().numpy()
  164. batch_count = len(labels)
  165. # 更新num_correct 和 num_count
  166. self.num_correct += batch_correct
  167. self.num_count += batch_count
  168. def accumulate(self):
  169. # 使用累计的数据,计算总的指标
  170. if self.num_count == 0:
  171. return 0
  172. return self.num_correct / self.num_count
  173. def reset(self):
  174. # 重置正确的数目和总数
  175. self.num_correct = 0
  176. self.num_count = 0
  177. def name(self):
  178. return "Accuracy"
  179. def draw_process(title,color,iters,data,label):
  180. plt.title(title, fontsize=24)
  181. plt.xlabel("iter", fontsize=20)
  182. plt.ylabel(label, fontsize=20)
  183. plt.plot(iters, data,color=color,label=label)
  184. plt.legend()
  185. plt.grid()
  186. print(plt.show())

Training starts in lstm.py:

np.random.seed(0)
random.seed(0)
torch.seed()
# Number of training epochs
num_epochs = 3
# Learning rate
learning_rate = 0.001
# Number of embeddings = vocabulary size
num_embeddings = len(word2id_dict)
# Embedding dimensionality
input_size = 256
# Dimensionality of the LSTM hidden state
hidden_size = 256
# Instantiate the model
model = Model_BiLSTM_FC(num_embeddings, input_size, hidden_size).to(device)
# Optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999))
# Loss function
loss_fn = nn.CrossEntropyLoss()
# Evaluation metric
metric = Accuracy()
# Instantiate the Runner
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Train the model
start_time = time.time()
runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=10, log_steps=10, save_path="./checkpoints/best.pdparams")
end_time = time.time()
print("time: ", (end_time-start_time))

The output is as follows:

Plot the loss curves on the training and validation sets during training, and the accuracy curve on the validation set:

from nndl import plot_training_loss_acc
# Figure file name
fig_name = "./images/6.16.pdf"
# sample_step: sampling step of the training loss, i.e. plot one point every sample_step points
# loss_legend_loc: legend position in the loss plot
# acc_legend_loc: legend position in the accuracy plot
plot_training_loss_acc(runner, fig_name, fig_size=(16,6), sample_step=10, loss_legend_loc="lower left", acc_legend_loc="lower right")

plot_training_loss_acc needs to be implemented in nndl.py as follows:

def plot_training_loss_acc(runner, fig_name, fig_size=(16, 6), sample_step=10, loss_legend_loc="lower left",
                           acc_legend_loc="lower left"):
    plt.figure(figsize=fig_size)
    plt.subplot(1, 2, 1)
    train_items = runner.train_step_losses[::sample_step]
    train_steps = [x[0] for x in train_items]
    train_losses = [x[1] for x in train_items]
    plt.plot(train_steps, train_losses, color='#8E004D', label="Train loss")
    while runner.dev_losses[-1][0] == -1:
        runner.dev_losses.pop()
        runner.dev_scores.pop()
    dev_steps = [x[0] for x in runner.dev_losses]
    dev_losses = [x[1] for x in runner.dev_losses]
    plt.plot(dev_steps, dev_losses, color='#E20079', linestyle='--', label="Dev loss")
    # Axes and legend
    plt.ylabel("loss", fontsize='x-large')
    plt.xlabel("step", fontsize='x-large')
    plt.legend(loc=loss_legend_loc, fontsize='x-large')
    plt.subplot(1, 2, 2)
    # Plot the validation accuracy curve
    plt.plot(dev_steps, runner.dev_scores, color='#E20079', linestyle="--", label="Dev accuracy")
    # Axes and legend
    plt.ylabel("score", fontsize='x-large')
    plt.xlabel("step", fontsize='x-large')
    plt.legend(loc=acc_legend_loc, fontsize='x-large')
    plt.savefig(fig_name)
    plt.show()

The output is as follows:

The figure shows the loss curves of the text classification model during training and its accuracy curve on the validation set. In the loss plot, the solid line is the training loss and the dashed line is the validation loss. As training proceeds, the training loss keeps decreasing, while the validation loss starts to rise after roughly 200 steps because the model begins to overfit; keeping the checkpoint that performs best on the validation set addresses this. The accuracy curve shows that validation accuracy first rises sharply, stops improving after about 200 steps, and then drops slightly due to overfitting.

4 Model Evaluation

Load the best model saved during training and evaluate it on the test set.

model_path = "./checkpoints/best.pdparams"
runner.load_model(model_path)
accuracy, _ = runner.evaluate(test_loader)
print(f"Evaluate on test set, Accuracy: {accuracy:.5f}")

The output is as follows:

5 Model Prediction

Given an arbitrary sentence, use the trained model to predict the sentiment polarity it expresses.

id2label = {0: "negative", 1: "positive"}
text = "this movie is so great. I watched it three times already"
# Process a single text
sentence = text.split(" ")
words = [word2id_dict[word] if word in word2id_dict else word2id_dict['[UNK]'] for word in sentence]
words = words[:max_seq_len]
sequence_length = torch.tensor([len(words)], dtype=torch.int64)
words = torch.tensor(words, dtype=torch.int64).unsqueeze(0)
# Run the model to get a prediction
logits = runner.predict((words.to(device), sequence_length.to(device)))
max_label_id = torch.argmax(logits, dim=-1).cpu().numpy()[0]
pred_label = id2label[max_label_id]
print("Label: ", pred_label)

The output is:

6 Extension Experiments

6.1 Using PyTorch's built-in unidirectional LSTM for text classification

First, modify the model definition: set bidirectional to False in nn.LSTM to get a unidirectional LSTM (the parameter can also simply be omitted, since it defaults to False), and change the linear layer's shape to [hidden_size, num_classes].

  1. class AveragePooling(nn.Module):
  2. def __init__(self):
  3. super(AveragePooling, self).__init__()
  4. def forward(self, sequence_output, sequence_length):
  5. # 假设 sequence_length 是一个 PyTorch 张量
  6. sequence_length = sequence_length.unsqueeze(-1).to(torch.float32)
  7. # 根据sequence_length生成mask矩阵,用于对Padding位置的信息进行mask
  8. max_len = sequence_output.shape[1]
  9. mask = torch.arange(max_len) < sequence_length
  10. mask = mask.to(torch.float32).unsqueeze(-1)
  11. # 对序列中paddling部分进行mask
  12. sequence_output = torch.multiply(sequence_output, mask.to('cuda'))
  13. # 对序列中的向量取均值
  14. batch_mean_hidden = torch.divide(torch.sum(sequence_output, dim=1), sequence_length.to('cuda'))
  15. return batch_mean_hidden
  16. class Model_BiLSTM_FC(nn.Module):
  17. def __init__(self, num_embeddings, input_size, hidden_size, num_classes=2):
  18. super(Model_BiLSTM_FC, self).__init__()
  19. # 词典大小
  20. self.num_embeddings = num_embeddings
  21. # 单词向量的维度
  22. self.input_size = input_size
  23. # LSTM隐藏单元数量
  24. self.hidden_size = hidden_size
  25. # 情感分类类别数量
  26. self.num_classes = num_classes
  27. # 实例化嵌入层
  28. self.embedding_layer = nn.Embedding(num_embeddings, input_size, padding_idx=0)
  29. # 实例化LSTM层
  30. self.lstm_layer = nn.LSTM(input_size, hidden_size, batch_first=True)
  31. # 实例化聚合层
  32. self.average_layer = AveragePooling()
  33. # 实例化输出层
  34. self.output_layer = nn.Linear(hidden_size, num_classes)
  35. def forward(self, inputs):
  36. # 对模型输入拆分为序列数据和mask
  37. input_ids, sequence_length = inputs
  38. # 获取词向量
  39. inputs_emb = self.embedding_layer(input_ids)
  40. packed_input = nn.utils.rnn.pack_padded_sequence(inputs_emb, sequence_length.cpu(), batch_first=True,
  41. enforce_sorted=False)
  42. # 使用lstm处理数据
  43. packed_output, _ = self.lstm_layer(packed_input)
  44. # 解包输出
  45. sequence_output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True)
  46. # 使用聚合层聚合sequence_output
  47. batch_mean_hidden = self.average_layer(sequence_output, sequence_length)
  48. # 输出文本分类logits
  49. logits = self.output_layer(batch_mean_hidden)
  50. return logits

Next, train this unidirectional model; the code is as follows:

  1. np.random.seed(0)
  2. random.seed(0)
  3. torch.seed()
  4. # 指定训练轮次
  5. num_epochs = 3
  6. # 指定学习率
  7. learning_rate = 0.001
  8. # 指定embedding的数量为词表长度
  9. num_embeddings = len(word2id_dict)
  10. # embedding向量的维度
  11. input_size = 256
  12. # LSTM网络隐状态向量的维度
  13. hidden_size = 256
  14. # 实例化模型
  15. model = Model_BiLSTM_FC(num_embeddings, input_size, hidden_size).to(device)
  16. # 指定优化器
  17. optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999))
  18. # 指定损失函数
  19. loss_fn = nn.CrossEntropyLoss()
  20. # 指定评估指标
  21. metric = Accuracy()
  22. # 实例化Runner
  23. runner = RunnerV3(model, optimizer, loss_fn, metric)
  24. # 模型训练
  25. start_time = time.time()
  26. runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=10, log_steps=10, save_path="./checkpoints/best_forward.pdparams")
  27. end_time = time.time()
  28. print("time: ", (end_time-start_time))

The rest of the code is identical to the bidirectional LSTM version (it has been a while since I wrote it; the complete, runnable code is reproduced at the end).

6.2 Using a self-implemented unidirectional LSTM for text classification

The LSTM implemented earlier returns only the hidden state of the last time step by default, but this experiment needs the hidden states of all time steps, so the self-implemented LSTM has to be modified to return the full sequence of vectors. The code is as follows:

  1. class LSTM(nn.Module):
  2. def __init__(self, input_size, hidden_size, Wi_attr=None, Wf_attr=None, Wo_attr=None, Wc_attr=None,
  3. Ui_attr=None, Uf_attr=None, Uo_attr=None, Uc_attr=None, bi_attr=None, bf_attr=None,
  4. bo_attr=None, bc_attr=None):
  5. super(LSTM, self).__init__()
  6. self.input_size = input_size
  7. self.hidden_size = hidden_size
  8. # 初始化模型参数
  9. if Wi_attr is None:
  10. Wi = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
  11. else:
  12. Wi = torch.tensor(Wi_attr, dtype=torch.float32)
  13. self.W_i = torch.nn.Parameter(Wi.to('cuda'))
  14. if Wf_attr is None:
  15. Wf = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
  16. else:
  17. Wf = torch.tensor(Wf_attr, dtype=torch.float32)
  18. self.W_f = torch.nn.Parameter(Wf.to('cuda'))
  19. if Wo_attr is None:
  20. Wo = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
  21. else:
  22. Wo = torch.tensor(Wo_attr, dtype=torch.float32)
  23. self.W_o = torch.nn.Parameter(Wo.to('cuda'))
  24. if Wc_attr is None:
  25. Wc = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
  26. else:
  27. Wc = torch.tensor(Wc_attr, dtype=torch.float32)
  28. self.W_c = torch.nn.Parameter(Wc.to('cuda'))
  29. if Ui_attr is None:
  30. Ui = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
  31. else:
  32. Ui = torch.tensor(Ui_attr, dtype=torch.float32)
  33. self.U_i = torch.nn.Parameter(Ui.to('cuda'))
  34. if Uf_attr is None:
  35. Uf = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
  36. else:
  37. Uf = torch.tensor(Uf_attr, dtype=torch.float32)
  38. self.U_f = torch.nn.Parameter(Uf.to('cuda'))
  39. if Uo_attr is None:
  40. Uo = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
  41. else:
  42. Uo = torch.tensor(Uo_attr, dtype=torch.float32)
  43. self.U_o = torch.nn.Parameter(Uo.to('cuda'))
  44. if Uc_attr is None:
  45. Uc = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
  46. else:
  47. Uc = torch.tensor(Uc_attr, dtype=torch.float32)
  48. self.U_c = torch.nn.Parameter(Uc.to('cuda'))
  49. if bi_attr is None:
  50. bi = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
  51. else:
  52. bi = torch.tensor(bi_attr, dtype=torch.float32)
  53. self.b_i = torch.nn.Parameter(bi.to('cuda'))
  54. if bf_attr is None:
  55. bf = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
  56. else:
  57. bf = torch.tensor(bf_attr, dtype=torch.float32)
  58. self.b_f = torch.nn.Parameter(bf.to('cuda'))
  59. if bo_attr is None:
  60. bo = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
  61. else:
  62. bo = torch.tensor(bo_attr, dtype=torch.float32)
  63. self.b_o = torch.nn.Parameter(bo.to('cuda'))
  64. if bc_attr is None:
  65. bc = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
  66. else:
  67. bc = torch.tensor(bc_attr, dtype=torch.float32)
  68. self.b_c = torch.nn.Parameter(bc.to('cuda'))
  69. # 初始化状态向量和隐状态向量
  70. def init_state(self, batch_size):
  71. hidden_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32).to('cuda')
  72. cell_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32).to('cuda')
  73. return hidden_state, cell_state
  74. # 定义前向计算
  75. def forward(self, inputs, states=None):
  76. # inputs: 输入数据,其shape为batch_size x seq_len x input_size
  77. batch_size, seq_len, input_size = torch.tensor(inputs).shape
  78. # 初始化起始的单元状态和隐状态向量,其shape为batch_size x hidden_size
  79. if states is None:
  80. states = self.init_state(batch_size)
  81. hidden_state, cell_state = states
  82. # 执行LSTM计算,包括:输入门、遗忘门和输出门、候选内部状态、内部状态和隐状态向量
  83. for step in range(seq_len):
  84. # 获取当前时刻的输入数据step_input: 其shape为batch_size x input_size
  85. step_input = inputs[:, step, :]
  86. # 计算输入门, 遗忘门和输出门, 其shape为:batch_size x hidden_size
  87. I_gate = F.sigmoid(torch.matmul(step_input, self.W_i) + torch.matmul(hidden_state, self.U_i) + self.b_i)
  88. F_gate = F.sigmoid(torch.matmul(step_input, self.W_f) + torch.matmul(hidden_state, self.U_f) + self.b_f)
  89. O_gate = F.sigmoid(torch.matmul(step_input, self.W_o) + torch.matmul(hidden_state, self.U_o) + self.b_o)
  90. # 计算候选状态向量, 其shape为:batch_size x hidden_size
  91. C_tilde = F.tanh(torch.matmul(step_input, self.W_c) + torch.matmul(hidden_state, self.U_c) + self.b_c)
  92. # 计算单元状态向量, 其shape为:batch_size x hidden_size
  93. cell_state = F_gate * cell_state + I_gate * C_tilde
  94. # 计算隐状态向量,其shape为:batch_size x hidden_size
  95. hidden_state = O_gate * F.tanh(cell_state)
  96. return hidden_state

Next, modify the Model_BiLSTM_FC model to replace `nn.LSTM` with the self-implemented LSTM; the code is as follows:

  1. class AveragePooling(nn.Module):
  2. def __init__(self):
  3. super(AveragePooling, self).__init__()
  4. def forward(self, sequence_output, sequence_length):
  5. # 假设 sequence_length 是一个 PyTorch 张量
  6. sequence_length = sequence_length.unsqueeze(-1).to(torch.float32)
  7. # 根据sequence_length生成mask矩阵,用于对Padding位置的信息进行mask
  8. max_len = sequence_output.shape[1]
  9. mask = torch.arange(max_len, device='cuda') < sequence_length.to('cuda')
  10. mask = mask.to(torch.float32).unsqueeze(-1)
  11. # 对序列中paddling部分进行mask
  12. sequence_output = torch.multiply(sequence_output, mask.to('cuda'))
  13. # 对序列中的向量取均值
  14. batch_mean_hidden = torch.divide(torch.sum(sequence_output, dim=1), sequence_length.to('cuda'))
  15. return batch_mean_hidden
  16. class Model_BiLSTM_FC(nn.Module):
  17. def __init__(self, num_embeddings, input_size, hidden_size, num_classes=2):
  18. super(Model_BiLSTM_FC, self).__init__()
  19. # 词典大小
  20. self.num_embeddings = num_embeddings
  21. # 单词向量的维度
  22. self.input_size = input_size
  23. # LSTM隐藏单元数量
  24. self.hidden_size = hidden_size
  25. # 情感分类类别数量
  26. self.num_classes = num_classes
  27. # 实例化嵌入层
  28. self.embedding_layer = nn.Embedding(num_embeddings, input_size, padding_idx=0)
  29. # 实例化LSTM层
  30. self.lstm_layer = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
  31. # 实例化聚合层
  32. self.average_layer = AveragePooling()
  33. # 实例化输出层
  34. self.output_layer = nn.Linear(hidden_size * 2, num_classes)
  35. def forward(self, inputs):
  36. # 对模型输入拆分为序列数据和mask
  37. input_ids, sequence_length = inputs
  38. # 获取词向量
  39. inputs_emb = self.embedding_layer(input_ids)
  40. packed_input = nn.utils.rnn.pack_padded_sequence(inputs_emb, sequence_length.cpu(), batch_first=True,
  41. enforce_sorted=False)
  42. # 使用lstm处理数据
  43. packed_output, _ = self.lstm_layer(packed_input)
  44. # 解包输出
  45. sequence_output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True)
  46. # 使用聚合层聚合sequence_output
  47. batch_mean_hidden = self.average_layer(sequence_output, sequence_length)
  48. # 输出文本分类logits
  49. logits = self.output_layer(batch_mean_hidden)
  50. return logits

Run and test; the code is as follows:

  1. np.random.seed(0)
  2. random.seed(0)
  3. torch.seed()
  4. # 指定训练轮次
  5. num_epochs = 3
  6. # 指定学习率
  7. learning_rate = 0.001
  8. # 指定embedding的数量为词表长度
  9. num_embeddings = len(word2id_dict)
  10. # embedding向量的维度
  11. input_size = 256
  12. # LSTM网络隐状态向量的维度
  13. hidden_size = 256
  14. # 实例化模型
  15. model = Model_BiLSTM_FC(num_embeddings, input_size, hidden_size).to('cuda')
  16. # 指定优化器
  17. optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999))
  18. # 指定损失函数
  19. loss_fn = nn.CrossEntropyLoss()
  20. # 指定评估指标
  21. metric = Accuracy()
  22. # 实例化Runner
  23. runner = RunnerV3(model, optimizer, loss_fn, metric)
  24. # 模型训练
  25. start_time = time.time()
  26. runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=10, log_steps=10, save_path="./checkpoints/best_self_forward.pdparams")
  27. end_time = time.time()
  28. print("time: ", (end_time-start_time))
  29. model_path = "./checkpoints/best_self_forward.pdparams"
  30. runner.load_model(model_path)
  31. accuracy, _ = runner.evaluate(test_loader)
  32. print(f"Evaluate on test set, Accuracy: {accuracy:.5f}")

Summary

Comparing the bidirectional and unidirectional LSTMs, their accuracies differ little, but their speeds differ considerably. In theory the bidirectional LSTM has more parameters and more computation and should be slower, yet here it actually ran faster than the unidirectional one, so in some situations a bidirectional LSTM can be the better choice. This experiment also made the appeal of NLP much clearer to me.

This felt like the hardest lab so far: two functions are not given in Prof. Qiu's book Neural Networks and Deep Learning and had to be written from scratch (only after finishing did I realize I could have looked up the original Paddle code online, silly me). Without moving to the GPU the code is also painfully slow; I finally understood what the teacher meant about starting a run, going out for a walk, and coming back to find it was wrong, then fixing it and running again. A single run took about 40 minutes, so I eventually gave in and spent half a day porting the code to the GPU, which in turn meant tracking down every tensor that had to live there. Looking back, doing all the modification and writing myself made this experiment genuinely worthwhile, far more satisfying than any previous one. (PS: the reason this post went up so quickly is that my whole dorm is waiting for my code, which is hilarious.)

Here is all the code.

utils/data.py

  1. import os
  2. def load_vocab(path):
  3. assert os.path.exists(path)
  4. words = []
  5. with open(path, "r", encoding="utf-8") as f:
  6. words = f.readlines()
  7. words = [word.strip() for word in words if word.strip()]
  8. word2id = dict(zip(words, range(len(words))))
  9. return word2id

nndl.py

  1. import torch
  2. import matplotlib.pyplot as plt
  3. def draw_process(title,color,iters,data,label):
  4. plt.title(title, fontsize=24)
  5. plt.xlabel("iter", fontsize=20)
  6. plt.ylabel(label, fontsize=20)
  7. plt.plot(iters, data,color=color,label=label)
  8. plt.legend()
  9. plt.grid()
  10. print(plt.show())
  11. def plot_training_loss_acc(runner, fig_name, fig_size=(16, 6), sample_step=10, loss_legend_loc="lower left",
  12. acc_legend_loc="lower left"):
  13. plt.figure(figsize=fig_size)
  14. plt.subplot(1, 2, 1)
  15. train_items = runner.train_step_losses[::sample_step]
  16. train_steps = [x[0] for x in train_items]
  17. train_losses = [x[1] for x in train_items]
  18. plt.plot(train_steps, train_losses, color='#8E004D', label="Train loss")
  19. while runner.dev_losses[-1][0] == -1:
  20. runner.dev_losses.pop()
  21. runner.dev_scores.pop()
  22. dev_steps = [x[0] for x in runner.dev_losses]
  23. dev_losses = [x[1] for x in runner.dev_losses]
  24. plt.plot(dev_steps, dev_losses, color='#E20079', linestyle='--', label="Dev loss")
  25. # 绘制坐标轴和图例
  26. plt.ylabel("loss", fontsize='x-large')
  27. plt.xlabel("step", fontsize='x-large')
  28. plt.legend(loc=loss_legend_loc, fontsize='x-large')
  29. plt.subplot(1, 2, 2)
  30. # 绘制评价准确率变化曲线
  31. plt.plot(dev_steps, runner.dev_scores, color='#E20079', linestyle="--", label="Dev accuracy")
  32. # 绘制坐标轴和图例
  33. plt.ylabel("score", fontsize='x-large')
  34. plt.xlabel("step", fontsize='x-large')
  35. plt.legend(loc=acc_legend_loc, fontsize='x-large')
  36. plt.savefig(fig_name)
  37. plt.show()
  38. class RunnerV3(object):
  39. def __init__(self, model, optimizer, loss_fn, metric, **kwargs):
  40. self.model = model
  41. self.optimizer = optimizer
  42. self.loss_fn = loss_fn
  43. self.metric = metric # 只用于计算评价指标
  44. # 记录训练过程中的评价指标变化情况
  45. self.dev_scores = []
  46. # 记录训练过程中的损失函数变化情况
  47. self.train_epoch_losses = [] # 一个epoch记录一次loss
  48. self.train_step_losses = [] # 一个step记录一次loss
  49. self.dev_losses = []
  50. # 记录全局最优指标
  51. self.best_score = 0
  52. def train(self, train_loader, dev_loader=None, **kwargs):
  53. # 将模型切换为训练模式
  54. self.model.train()
  55. # 传入训练轮数,如果没有传入值则默认为0
  56. num_epochs = kwargs.get("num_epochs", 0)
  57. # 传入log打印频率,如果没有传入值则默认为100
  58. log_steps = kwargs.get("log_steps", 100)
  59. # 评价频率
  60. eval_steps = kwargs.get("eval_steps", 0)
  61. # 传入模型保存路径,如果没有传入值则默认为"best_model.pdparams"
  62. save_path = kwargs.get("save_path", "best_model.pdparams")
  63. custom_print_log = kwargs.get("custom_print_log", None)
  64. # 训练总的步数
  65. num_training_steps = num_epochs * len(train_loader)
  66. if eval_steps:
  67. if self.metric is None:
  68. raise RuntimeError('Error: Metric can not be None!')
  69. if dev_loader is None:
  70. raise RuntimeError('Error: dev_loader can not be None!')
  71. # 运行的step数目
  72. global_step = 0
  73. total_acces = []
  74. total_losses = []
  75. Iters = []
  76. # 进行num_epochs轮训练
  77. for epoch in range(num_epochs):
  78. # 用于统计训练集的损失
  79. total_loss = 0
  80. for step, data in enumerate(train_loader):
  81. X, y = data
  82. # 获取模型预测
  83. # 计算logits
  84. logits = self.model(X)
  85. # 将y转换为和logits相同的形状
  86. acc_y = y.view(-1, 1)
  87. # 计算准确率
  88. probs = torch.softmax(logits, dim=1)
  89. pred = torch.argmax(probs, dim=1)
  90. correct = (pred == acc_y).sum().item()
  91. total = acc_y.size(0)
  92. acc = correct / total
  93. total_acces.append(acc)
  94. # print(acc.numpy()[0])
  95. loss = self.loss_fn(logits, y) # 默认求mean
  96. total_loss += loss
  97. total_losses.append(loss.item())
  98. Iters.append(global_step)
  99. # 训练过程中,每个step的loss进行保存
  100. self.train_step_losses.append((global_step, loss.item()))
  101. if log_steps and global_step % log_steps == 0:
  102. print(
  103. f"[Train] epoch: {epoch}/{num_epochs}, step: {global_step}/{num_training_steps}, loss: {loss.item():.5f}")
  104. # 梯度反向传播,计算每个参数的梯度值
  105. loss.backward()
  106. if custom_print_log:
  107. custom_print_log(self)
  108. # 小批量梯度下降进行参数更新
  109. self.optimizer.step()
  110. # 梯度归零
  111. self.optimizer.zero_grad()
  112. # 判断是否需要评价
  113. if eval_steps > 0 and global_step != 0 and \
  114. (global_step % eval_steps == 0 or global_step == (num_training_steps - 1)):
  115. dev_score, dev_loss = self.evaluate(dev_loader, global_step=global_step)
  116. print(f"[Evaluate] dev score: {dev_score:.5f}, dev loss: {dev_loss:.5f}")
  117. # 将模型切换为训练模式
  118. self.model.train()
  119. # 如果当前指标为最优指标,保存该模型
  120. if dev_score > self.best_score:
  121. self.save_model(save_path)
  122. print(
  123. f"[Evaluate] best accuracy performence has been updated: {self.best_score:.5f} --> {dev_score:.5f}")
  124. self.best_score = dev_score
  125. global_step += 1
  126. # 当前epoch 训练loss累计值
  127. trn_loss = (total_loss / len(train_loader)).item()
  128. # epoch粒度的训练loss保存
  129. self.train_epoch_losses.append(trn_loss)
  130. draw_process("trainning acc", "green", Iters, total_acces, "trainning acc")
  131. print("total_acc:")
  132. print(total_acces)
  133. print("total_loss:")
  134. print(total_losses)
  135. print("[Train] Training done!")
  136. # 模型评估阶段,使用'paddle.no_grad()'控制不计算和存储梯度
  137. @torch.no_grad()
  138. def evaluate(self, dev_loader, **kwargs):
  139. assert self.metric is not None
  140. # 将模型设置为评估模式
  141. self.model.eval()
  142. global_step = kwargs.get("global_step", -1)
  143. # 用于统计训练集的损失
  144. total_loss = 0
  145. # 重置评价
  146. self.metric.reset()
  147. # 遍历验证集每个批次
  148. for batch_id, data in enumerate(dev_loader):
  149. X, y = data
  150. # 计算模型输出
  151. logits = self.model(X)
  152. # 计算损失函数
  153. loss = self.loss_fn(logits, y).item()
  154. # 累积损失
  155. total_loss += loss
  156. # 累积评价
  157. self.metric.update(logits, y)
  158. dev_loss = (total_loss / len(dev_loader))
  159. self.dev_losses.append((global_step, dev_loss))
  160. dev_score = self.metric.accumulate()
  161. self.dev_scores.append(dev_score)
  162. return dev_score, dev_loss
  163. # 模型评估阶段,使用'paddle.no_grad()'控制不计算和存储梯度
  164. @torch.no_grad()
  165. def predict(self, x, **kwargs):
  166. # 将模型设置为评估模式
  167. self.model.eval()
  168. # 运行模型前向计算,得到预测值
  169. logits = self.model(x)
  170. return logits
  171. def save_model(self, save_path):
  172. torch.save(self.model.state_dict(), save_path)
  173. def load_model(self, model_path):
  174. model_state_dict = torch.load(model_path)
  175. self.model.load_state_dict(model_state_dict)
  176. class Accuracy():
  177. def __init__(self, is_logist=True):
  178. # 用于统计正确的样本个数
  179. self.num_correct = 0
  180. # 用于统计样本的总数
  181. self.num_count = 0
  182. self.is_logist = is_logist
  183. def update(self, outputs, labels):
  184. # 判断是二分类任务还是多分类任务,shape[1]=1时为二分类任务,shape[1]>1时为多分类任务
  185. if outputs.shape[1] == 1: # 二分类
  186. outputs = torch.squeeze(outputs, dim=-1)
  187. if self.is_logist:
  188. # logist判断是否大于0
  189. preds = torch.tensor((outputs >= 0), dtype=torch.float32)
  190. else:
  191. # 如果不是logist,判断每个概率值是否大于0.5,当大于0.5时,类别为1,否则类别为0
  192. preds = torch.tensor((outputs >= 0.5), dtype=torch.float32)
  193. else:
  194. # 多分类时,使用'torch.argmax'计算最大元素索引作为类别
  195. preds = torch.argmax(outputs, dim=1)
  196. # 获取本批数据中预测正确的样本个数
  197. labels = torch.squeeze(labels, dim=-1)
  198. batch_correct = torch.sum(torch.tensor(preds == labels, dtype=torch.float32)).cpu().numpy()
  199. batch_count = len(labels)
  200. # 更新num_correct 和 num_count
  201. self.num_correct += batch_correct
  202. self.num_count += batch_count
  203. def accumulate(self):
  204. # 使用累计的数据,计算总的指标
  205. if self.num_count == 0:
  206. return 0
  207. return self.num_correct / self.num_count
  208. def reset(self):
  209. # 重置正确的数目和总数
  210. self.num_correct = 0
  211. self.num_count = 0
  212. def name(self):
  213. return "Accuracy"

lstm.py

  1. import os
  2. import torch
  3. import torch.nn as nn
  4. from torch.utils.data import Dataset
  5. from utils.data import load_vocab
  6. from functools import partial
  7. import time
  8. import random
  9. import numpy as np
  10. from nndl import Accuracy, RunnerV3
  11. device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
  12. # 加载数据集
  13. def load_imdb_data(path):
  14. assert os.path.exists(path)
  15. trainset, devset, testset = [], [], []
  16. with open(os.path.join(path, "train.txt"), "r", encoding='utf-8') as fr:
  17. for line in fr:
  18. sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
  19. trainset.append((sentence, sentence_label))
  20. with open(os.path.join(path, "dev.txt"), "r", encoding='utf-8') as fr:
  21. for line in fr:
  22. sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
  23. devset.append((sentence, sentence_label))
  24. with open(os.path.join(path, "test.txt"), "r", encoding='utf-8') as fr:
  25. for line in fr:
  26. sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
  27. testset.append((sentence, sentence_label))
  28. return trainset, devset, testset
  29. # 加载IMDB数据集
  30. train_data, dev_data, test_data = load_imdb_data("./dataset/")
  31. # # 打印一下加载后的数据样式
  32. print(train_data[4])
  33. class IMDBDataset(Dataset):
  34. def __init__(self, examples, word2id_dict):
  35. super(IMDBDataset, self).__init__()
  36. # 词典,用于将单词转为字典索引的数字
  37. self.word2id_dict = word2id_dict
  38. # 加载后的数据集
  39. self.examples = self.words_to_id(examples)
  40. def words_to_id(self, examples):
  41. tmp_examples = []
  42. for idx, example in enumerate(examples):
  43. seq, label = example
  44. # 将单词映射为字典索引的ID, 对于词典中没有的单词用[UNK]对应的ID进行替代
  45. seq = [self.word2id_dict.get(word, self.word2id_dict['[UNK]']) for word in seq.split(" ")]
  46. label = int(label)
  47. tmp_examples.append([seq, label])
  48. return tmp_examples
  49. def __getitem__(self, idx):
  50. seq, label = self.examples[idx]
  51. return seq, label
  52. def __len__(self):
  53. return len(self.examples)
  54. # 加载词表
  55. word2id_dict = load_vocab("./dataset/vocab.txt")
  56. # 实例化Dataset
  57. train_set = IMDBDataset(train_data, word2id_dict)
  58. dev_set = IMDBDataset(dev_data, word2id_dict)
  59. test_set = IMDBDataset(test_data, word2id_dict)
  60. print('训练集样本数:', len(train_set))
  61. print('样本示例:', train_set[4])
  62. def collate_fn(batch_data, pad_val=0, max_seq_len=256):
  63. seqs, seq_lens, labels = [], [], []
  64. max_len = 0
  65. for example in batch_data:
  66. seq, label = example
  67. # 对数据序列进行截断
  68. seq = seq[:max_seq_len]
  69. # 对数据截断并保存于seqs中
  70. seqs.append(seq)
  71. seq_lens.append(len(seq))
  72. labels.append(label)
  73. # 保存序列最大长度
  74. max_len = max(max_len, len(seq))
  75. # 对数据序列进行填充至最大长度
  76. for i in range(len(seqs)):
  77. seqs[i] = seqs[i] + [pad_val] * (max_len - len(seqs[i]))
  78. # return (torch.tensor(seqs), torch.tensor(seq_lens)), torch.tensor(labels)
  79. return (torch.tensor(seqs).to(device), torch.tensor(seq_lens)), torch.tensor(labels).to(device)
  80. max_seq_len = 5
  81. batch_data = [[[1, 2, 3, 4, 5, 6], 1], [[2, 4, 6], 0]]
  82. (seqs, seq_lens), labels = collate_fn(batch_data, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
  83. print("seqs: ", seqs)
  84. print("seq_lens: ", seq_lens)
  85. print("labels: ", labels)
  86. max_seq_len = 256
  87. batch_size = 128
  88. collate_fn = partial(collate_fn, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
  89. train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
  90. shuffle=True, drop_last=False, collate_fn=collate_fn)
  91. dev_loader = torch.utils.data.DataLoader(dev_set, batch_size=batch_size,
  92. shuffle=False, drop_last=False, collate_fn=collate_fn)
  93. test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size,
  94. shuffle=False, drop_last=False, collate_fn=collate_fn)
  95. class AveragePooling(nn.Module):
  96. def __init__(self):
  97. super(AveragePooling, self).__init__()
  98. def forward(self, sequence_output, sequence_length):
  99. # 假设 sequence_length 是一个 PyTorch 张量
  100. sequence_length = sequence_length.unsqueeze(-1).to(torch.float32)
  101. # 根据sequence_length生成mask矩阵,用于对Padding位置的信息进行mask
  102. max_len = sequence_output.shape[1]
  103. mask = torch.arange(max_len, device='cuda') < sequence_length.to('cuda')
  104. mask = mask.to(torch.float32).unsqueeze(-1)
  105. # 对序列中paddling部分进行mask
  106. sequence_output = torch.multiply(sequence_output, mask.to('cuda'))
  107. # 对序列中的向量取均值
  108. batch_mean_hidden = torch.divide(torch.sum(sequence_output, dim=1), sequence_length.to('cuda'))
  109. return batch_mean_hidden
  110. class Model_BiLSTM_FC(nn.Module):
  111. def __init__(self, num_embeddings, input_size, hidden_size, num_classes=2):
  112. super(Model_BiLSTM_FC, self).__init__()
  113. # 词典大小
  114. self.num_embeddings = num_embeddings
  115. # 单词向量的维度
  116. self.input_size = input_size
  117. # LSTM隐藏单元数量
  118. self.hidden_size = hidden_size
  119. # 情感分类类别数量
  120. self.num_classes = num_classes
  121. # 实例化嵌入层
  122. self.embedding_layer = nn.Embedding(num_embeddings, input_size, padding_idx=0)
  123. # 实例化LSTM层
  124. self.lstm_layer = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
  125. # 实例化聚合层
  126. self.average_layer = AveragePooling()
  127. # 实例化输出层
  128. self.output_layer = nn.Linear(hidden_size * 2, num_classes)
  129. def forward(self, inputs):
  130. # 对模型输入拆分为序列数据和mask
  131. input_ids, sequence_length = inputs
  132. # 获取词向量
  133. inputs_emb = self.embedding_layer(input_ids)
  134. packed_input = nn.utils.rnn.pack_padded_sequence(inputs_emb, sequence_length.cpu(), batch_first=True,
  135. enforce_sorted=False)
  136. # 使用lstm处理数据
  137. packed_output, _ = self.lstm_layer(packed_input)
  138. # 解包输出
  139. sequence_output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True)
  140. # 使用聚合层聚合sequence_output
  141. batch_mean_hidden = self.average_layer(sequence_output, sequence_length)
  142. # 输出文本分类logits
  143. logits = self.output_layer(batch_mean_hidden)
  144. return logits
  145. np.random.seed(0)
  146. random.seed(0)
  147. torch.seed()
  148. # 指定训练轮次
  149. num_epochs = 3
  150. # 指定学习率
  151. learning_rate = 0.001
  152. # 指定embedding的数量为词表长度
  153. num_embeddings = len(word2id_dict)
  154. # embedding向量的维度
  155. input_size = 256
  156. # LSTM网络隐状态向量的维度
  157. hidden_size = 256
  158. # 实例化模型
  159. model = Model_BiLSTM_FC(num_embeddings, input_size, hidden_size).to(device)
  160. # 指定优化器
  161. optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999))
  162. # 指定损失函数
  163. loss_fn = nn.CrossEntropyLoss()
  164. # 指定评估指标
  165. metric = Accuracy()
  166. # 实例化Runner
  167. runner = RunnerV3(model, optimizer, loss_fn, metric)
  168. # 模型训练
  169. start_time = time.time()
  170. runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=10, log_steps=10, save_path="./checkpoints/best.pdparams")
  171. end_time = time.time()
  172. print("time: ", (end_time-start_time))
  173. from nndl import plot_training_loss_acc
  174. # 图像名字
  175. fig_name = "./images/6.16.pdf"
  176. # sample_step: 训练损失的采样step,即每隔多少个点选择1个点绘制
  177. # loss_legend_loc: loss 图像的图例放置位置
  178. # acc_legend_loc: acc 图像的图例放置位置
  179. plot_training_loss_acc(runner, fig_name, fig_size=(16,6), sample_step=10, loss_legend_loc="lower left", acc_legend_loc="lower right")
  180. model_path = "./checkpoints/best.pdparams"
  181. runner.load_model(model_path)
  182. accuracy, _ = runner.evaluate(test_loader)
  183. print(f"Evaluate on test set, Accuracy: {accuracy:.5f}")
  184. id2label={0:"消极情绪", 1:"积极情绪"}
  185. text = "this movie is so great. I watched it three times already"
  186. # 处理单条文本
  187. sentence = text.split(" ")
  188. words = [word2id_dict[word] if word in word2id_dict else word2id_dict['[UNK]'] for word in sentence]
  189. words = words[:max_seq_len]
  190. sequence_length = torch.tensor([len(words)], dtype=torch.int64)
  191. words = torch.tensor(words, dtype=torch.int64).unsqueeze(0)
  192. # 使用模型进行预测
  193. logits = runner.predict((words.to(device), sequence_length.to(device)))
  194. max_label_id = torch.argmax(logits, dim=-1).cpu().numpy()[0]
  195. pred_label = id2label[max_label_id]
  196. print("Label: ", pred_label)

extend_1.py (not everything needs to be displayed; disable the figure output yourself if you prefer)

  1. import os
  2. import torch
  3. import torch.nn as nn
  4. from torch.utils.data import Dataset
  5. from utils.data import load_vocab
  6. from functools import partial
  7. import time
  8. import random
  9. import numpy as np
  10. from nndl import Accuracy, RunnerV3
  11. device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
  12. # 加载数据集
  13. def load_imdb_data(path):
  14. assert os.path.exists(path)
  15. trainset, devset, testset = [], [], []
  16. with open(os.path.join(path, "train.txt"), "r", encoding='utf-8') as fr:
  17. for line in fr:
  18. sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
  19. trainset.append((sentence, sentence_label))
  20. with open(os.path.join(path, "dev.txt"), "r", encoding='utf-8') as fr:
  21. for line in fr:
  22. sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
  23. devset.append((sentence, sentence_label))
  24. with open(os.path.join(path, "test.txt"), "r", encoding='utf-8') as fr:
  25. for line in fr:
  26. sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
  27. testset.append((sentence, sentence_label))
  28. return trainset, devset, testset
  29. # 加载IMDB数据集
  30. train_data, dev_data, test_data = load_imdb_data("./dataset/")
  31. # # 打印一下加载后的数据样式
  32. # print(train_data[4])
  33. class IMDBDataset(Dataset):
  34. def __init__(self, examples, word2id_dict):
  35. super(IMDBDataset, self).__init__()
  36. # 词典,用于将单词转为字典索引的数字
  37. self.word2id_dict = word2id_dict
  38. # 加载后的数据集
  39. self.examples = self.words_to_id(examples)
  40. def words_to_id(self, examples):
  41. tmp_examples = []
  42. for idx, example in enumerate(examples):
  43. seq, label = example
  44. # 将单词映射为字典索引的ID, 对于词典中没有的单词用[UNK]对应的ID进行替代
  45. seq = [self.word2id_dict.get(word, self.word2id_dict['[UNK]']) for word in seq.split(" ")]
  46. label = int(label)
  47. tmp_examples.append([seq, label])
  48. return tmp_examples
  49. def __getitem__(self, idx):
  50. seq, label = self.examples[idx]
  51. return seq, label
  52. def __len__(self):
  53. return len(self.examples)
  54. # 加载词表
  55. word2id_dict = load_vocab("./dataset/vocab.txt")
  56. # 实例化Dataset
  57. train_set = IMDBDataset(train_data, word2id_dict)
  58. dev_set = IMDBDataset(dev_data, word2id_dict)
  59. test_set = IMDBDataset(test_data, word2id_dict)
  60. print('训练集样本数:', len(train_set))
  61. print('样本示例:', train_set[4])
  62. def collate_fn(batch_data, pad_val=0, max_seq_len=256):
  63. seqs, seq_lens, labels = [], [], []
  64. max_len = 0
  65. for example in batch_data:
  66. seq, label = example
  67. # 对数据序列进行截断
  68. seq = seq[:max_seq_len]
  69. # 对数据截断并保存于seqs中
  70. seqs.append(seq)
  71. seq_lens.append(len(seq))
  72. labels.append(label)
  73. # 保存序列最大长度
  74. max_len = max(max_len, len(seq))
  75. # 对数据序列进行填充至最大长度
  76. for i in range(len(seqs)):
  77. seqs[i] = seqs[i] + [pad_val] * (max_len - len(seqs[i]))
  78. # return (torch.tensor(seqs), torch.tensor(seq_lens)), torch.tensor(labels)
  79. return (torch.tensor(seqs).to(device), torch.tensor(seq_lens)), torch.tensor(labels).to(device)
  80. max_seq_len = 5
  81. batch_data = [[[1, 2, 3, 4, 5, 6], 1], [[2, 4, 6], 0]]
  82. (seqs, seq_lens), labels = collate_fn(batch_data, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
  83. print("seqs: ", seqs)
  84. print("seq_lens: ", seq_lens)
  85. print("labels: ", labels)
  86. max_seq_len = 256
  87. batch_size = 128
  88. collate_fn = partial(collate_fn, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
  89. train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
  90. shuffle=True, drop_last=False, collate_fn=collate_fn)
  91. dev_loader = torch.utils.data.DataLoader(dev_set, batch_size=batch_size,
  92. shuffle=False, drop_last=False, collate_fn=collate_fn)
  93. test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size,
  94. shuffle=False, drop_last=False, collate_fn=collate_fn)
  95. class AveragePooling(nn.Module):
  96. def __init__(self):
  97. super(AveragePooling, self).__init__()
  98. def forward(self, sequence_output, sequence_length):
  99. # 假设 sequence_length 是一个 PyTorch 张量
  100. sequence_length = sequence_length.unsqueeze(-1).to(torch.float32)
  101. # 根据sequence_length生成mask矩阵,用于对Padding位置的信息进行mask
  102. max_len = sequence_output.shape[1]
  103. mask = torch.arange(max_len, device='cuda') < sequence_length.to('cuda')
  104. mask = mask.to(torch.float32).unsqueeze(-1)
  105. # 对序列中paddling部分进行mask
  106. sequence_output = torch.multiply(sequence_output, mask.to('cuda'))
  107. # 对序列中的向量取均值
  108. batch_mean_hidden = torch.divide(torch.sum(sequence_output, dim=1), sequence_length.to('cuda'))
  109. return batch_mean_hidden
  110. class Model_BiLSTM_FC(nn.Module):
  111. def __init__(self, num_embeddings, input_size, hidden_size, num_classes=2):
  112. super(Model_BiLSTM_FC, self).__init__()
  113. # 词典大小
  114. self.num_embeddings = num_embeddings
  115. # 单词向量的维度
  116. self.input_size = input_size
  117. # LSTM隐藏单元数量
  118. self.hidden_size = hidden_size
  119. # 情感分类类别数量
  120. self.num_classes = num_classes
  121. # 实例化嵌入层
  122. self.embedding_layer = nn.Embedding(num_embeddings, input_size, padding_idx=0)
  123. # 实例化LSTM层
  124. self.lstm_layer = nn.LSTM(input_size, hidden_size, batch_first=True)
  125. # 实例化聚合层
  126. self.average_layer = AveragePooling()
  127. # 实例化输出层
  128. self.output_layer = nn.Linear(hidden_size, num_classes)
  129. def forward(self, inputs):
  130. # 对模型输入拆分为序列数据和mask
  131. input_ids, sequence_length = inputs
  132. # 获取词向量
  133. inputs_emb = self.embedding_layer(input_ids)
  134. packed_input = nn.utils.rnn.pack_padded_sequence(inputs_emb, sequence_length.cpu(), batch_first=True,
  135. enforce_sorted=False)
  136. # 使用lstm处理数据
  137. packed_output, _ = self.lstm_layer(packed_input)
  138. # 解包输出
  139. sequence_output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True)
  140. # 使用聚合层聚合sequence_output
  141. batch_mean_hidden = self.average_layer(sequence_output, sequence_length)
  142. # 输出文本分类logits
  143. logits = self.output_layer(batch_mean_hidden)
  144. return logits
  145. np.random.seed(0)
  146. random.seed(0)
  147. torch.seed()
  148. # 指定训练轮次
  149. num_epochs = 3
  150. # 指定学习率
  151. learning_rate = 0.001
  152. # 指定embedding的数量为词表长度
  153. num_embeddings = len(word2id_dict)
  154. # embedding向量的维度
  155. input_size = 256
  156. # LSTM网络隐状态向量的维度
  157. hidden_size = 256
  158. # 实例化模型
  159. model = Model_BiLSTM_FC(num_embeddings, input_size, hidden_size).to(device)
  160. # 指定优化器
  161. optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999))
  162. # 指定损失函数
  163. loss_fn = nn.CrossEntropyLoss()
  164. # 指定评估指标
  165. metric = Accuracy()
  166. # 实例化Runner
  167. runner = RunnerV3(model, optimizer, loss_fn, metric)
  168. # 模型训练
  169. start_time = time.time()
  170. runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=10, log_steps=10, save_path="./checkpoints/best_forward.pdparams")
  171. end_time = time.time()
  172. print("time: ", (end_time-start_time))
  173. from nndl import plot_training_loss_acc
  174. # 图像名字
  175. fig_name = "./images/6.16.pdf"
  176. # sample_step: 训练损失的采样step,即每隔多少个点选择1个点绘制
  177. # loss_legend_loc: loss 图像的图例放置位置
  178. # acc_legend_loc: acc 图像的图例放置位置
  179. plot_training_loss_acc(runner, fig_name, fig_size=(16,6), sample_step=10, loss_legend_loc="lower left", acc_legend_loc="lower right")
  180. model_path = "./checkpoints/best_forward.pdparams"
  181. runner.load_model(model_path)
  182. accuracy, _ = runner.evaluate(test_loader)
  183. print(f"Evaluate on test set, Accuracy: {accuracy:.5f}")
  184. id2label={0:"消极情绪", 1:"积极情绪"}
  185. text = "this movie is so great. I watched it three times already"
  186. # 处理单条文本
  187. sentence = text.split(" ")
  188. words = [word2id_dict[word] if word in word2id_dict else word2id_dict['[UNK]'] for word in sentence]
  189. words = words[:max_seq_len]
  190. sequence_length = torch.tensor([len(words)], dtype=torch.int64)
  191. words = torch.tensor(words, dtype=torch.int64).unsqueeze(0)
  192. # 使用模型进行预测
  193. logits = runner.predict((words.to(device), sequence_length.to(device)))
  194. max_label_id = torch.argmax(logits, dim=-1).cpu().numpy()[0]
  195. pred_label = id2label[max_label_id]
  196. print("Label: ", pred_label)

extend2.py 的完整代码如下:

  1. import torch
  2. import torch.nn as nn
  3. import torch.nn.functional as F
  4. import time
  5. import random
  6. import numpy as np
  7. from nndl import Accuracy, RunnerV3
  8. from functools import partial
  9. import os
  10. from torch.utils.data import Dataset
  11. from utils.data import load_vocab
  12. def load_imdb_data(path):
  13. assert os.path.exists(path)
  14. trainset, devset, testset = [], [], []
  15. with open(os.path.join(path, "train.txt"), "r", encoding='utf-8') as fr:
  16. for line in fr:
  17. sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
  18. trainset.append((sentence, sentence_label))
  19. with open(os.path.join(path, "dev.txt"), "r", encoding='utf-8') as fr:
  20. for line in fr:
  21. sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
  22. devset.append((sentence, sentence_label))
  23. with open(os.path.join(path, "test.txt"), "r", encoding='utf-8') as fr:
  24. for line in fr:
  25. sentence_label, sentence = line.strip().lower().split("\t", maxsplit=1)
  26. testset.append((sentence, sentence_label))
  27. return trainset, devset, testset
  28. # 加载IMDB数据集
  29. train_data, dev_data, test_data = load_imdb_data("./dataset/")
  30. # # 打印一下加载后的数据样式
  31. # print(train_data[4])
  32. class IMDBDataset(Dataset):
  33. def __init__(self, examples, word2id_dict):
  34. super(IMDBDataset, self).__init__()
  35. # 词典,用于将单词转为字典索引的数字
  36. self.word2id_dict = word2id_dict
  37. # 加载后的数据集
  38. self.examples = self.words_to_id(examples)
  39. def words_to_id(self, examples):
  40. tmp_examples = []
  41. for idx, example in enumerate(examples):
  42. seq, label = example
  43. # 将单词映射为字典索引的ID, 对于词典中没有的单词用[UNK]对应的ID进行替代
  44. seq = [self.word2id_dict.get(word, self.word2id_dict['[UNK]']) for word in seq.split(" ")]
  45. label = int(label)
  46. tmp_examples.append([seq, label])
  47. return tmp_examples
  48. def __getitem__(self, idx):
  49. seq, label = self.examples[idx]
  50. return seq, label
  51. def __len__(self):
  52. return len(self.examples)
  53. # 加载词表
  54. word2id_dict = load_vocab("./dataset/vocab.txt")
  55. # 实例化Dataset
  56. train_set = IMDBDataset(train_data, word2id_dict)
  57. dev_set = IMDBDataset(dev_data, word2id_dict)
  58. test_set = IMDBDataset(test_data, word2id_dict)
  59. print('训练集样本数:', len(train_set))
  60. print('样本示例:', train_set[4])
  61. def collate_fn(batch_data, pad_val=0, max_seq_len=256):
  62. seqs, seq_lens, labels = [], [], []
  63. max_len = 0
  64. for example in batch_data:
  65. seq, label = example
  66. # 对数据序列进行截断
  67. seq = seq[:max_seq_len]
  68. # 对数据截断并保存于seqs中
  69. seqs.append(seq)
  70. seq_lens.append(len(seq))
  71. labels.append(label)
  72. # 保存序列最大长度
  73. max_len = max(max_len, len(seq))
  74. # 对数据序列进行填充至最大长度
  75. for i in range(len(seqs)):
  76. seqs[i] = seqs[i] + [pad_val] * (max_len - len(seqs[i]))
  77. # return (torch.tensor(seqs), torch.tensor(seq_lens)), torch.tensor(labels)
  78. return (torch.tensor(seqs).to('cuda'), torch.tensor(seq_lens)), torch.tensor(labels).to('cuda')
  79. max_seq_len = 5
  80. batch_data = [[[1, 2, 3, 4, 5, 6], 1], [[2, 4, 6], 0]]
  81. (seqs, seq_lens), labels = collate_fn(batch_data, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
  82. print("seqs: ", seqs)
  83. print("seq_lens: ", seq_lens)
  84. print("labels: ", labels)
  85. max_seq_len = 256
  86. batch_size = 128
  87. collate_fn = partial(collate_fn, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
  88. train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
  89. shuffle=True, drop_last=False, collate_fn=collate_fn)
  90. dev_loader = torch.utils.data.DataLoader(dev_set, batch_size=batch_size,
  91. shuffle=False, drop_last=False, collate_fn=collate_fn)
  92. test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size,
  93. shuffle=False, drop_last=False, collate_fn=collate_fn)
  94. class LSTM(nn.Module):
  95. def __init__(self, input_size, hidden_size, Wi_attr=None, Wf_attr=None, Wo_attr=None, Wc_attr=None,
  96. Ui_attr=None, Uf_attr=None, Uo_attr=None, Uc_attr=None, bi_attr=None, bf_attr=None,
  97. bo_attr=None, bc_attr=None):
  98. super(LSTM, self).__init__()
  99. self.input_size = input_size
  100. self.hidden_size = hidden_size
  101. # 初始化模型参数
  102. if Wi_attr is None:
  103. Wi = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
  104. else:
  105. Wi = torch.tensor(Wi_attr, dtype=torch.float32)
  106. self.W_i = torch.nn.Parameter(Wi.to('cuda'))
  107. if Wf_attr is None:
  108. Wf = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
  109. else:
  110. Wf = torch.tensor(Wf_attr, dtype=torch.float32)
  111. self.W_f = torch.nn.Parameter(Wf.to('cuda'))
  112. if Wo_attr is None:
  113. Wo = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
  114. else:
  115. Wo = torch.tensor(Wo_attr, dtype=torch.float32)
  116. self.W_o = torch.nn.Parameter(Wo.to('cuda'))
  117. if Wc_attr is None:
  118. Wc = torch.zeros(size=[input_size, hidden_size], dtype=torch.float32)
  119. else:
  120. Wc = torch.tensor(Wc_attr, dtype=torch.float32)
  121. self.W_c = torch.nn.Parameter(Wc.to('cuda'))
  122. if Ui_attr is None:
  123. Ui = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
  124. else:
  125. Ui = torch.tensor(Ui_attr, dtype=torch.float32)
  126. self.U_i = torch.nn.Parameter(Ui.to('cuda'))
  127. if Uf_attr is None:
  128. Uf = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
  129. else:
  130. Uf = torch.tensor(Uf_attr, dtype=torch.float32)
  131. self.U_f = torch.nn.Parameter(Uf.to('cuda'))
  132. if Uo_attr is None:
  133. Uo = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
  134. else:
  135. Uo = torch.tensor(Uo_attr, dtype=torch.float32)
  136. self.U_o = torch.nn.Parameter(Uo.to('cuda'))
  137. if Uc_attr is None:
  138. Uc = torch.zeros(size=[hidden_size, hidden_size], dtype=torch.float32)
  139. else:
  140. Uc = torch.tensor(Uc_attr, dtype=torch.float32)
  141. self.U_c = torch.nn.Parameter(Uc.to('cuda'))
  142. if bi_attr is None:
  143. bi = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
  144. else:
  145. bi = torch.tensor(bi_attr, dtype=torch.float32)
  146. self.b_i = torch.nn.Parameter(bi.to('cuda'))
  147. if bf_attr is None:
  148. bf = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
  149. else:
  150. bf = torch.tensor(bf_attr, dtype=torch.float32)
  151. self.b_f = torch.nn.Parameter(bf.to('cuda'))
  152. if bo_attr is None:
  153. bo = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
  154. else:
  155. bo = torch.tensor(bo_attr, dtype=torch.float32)
  156. self.b_o = torch.nn.Parameter(bo.to('cuda'))
  157. if bc_attr is None:
  158. bc = torch.zeros(size=[1, hidden_size], dtype=torch.float32)
  159. else:
  160. bc = torch.tensor(bc_attr, dtype=torch.float32)
  161. self.b_c = torch.nn.Parameter(bc.to('cuda'))
  162. # 初始化状态向量和隐状态向量
  163. def init_state(self, batch_size):
  164. hidden_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32).to('cuda')
  165. cell_state = torch.zeros(size=[batch_size, self.hidden_size], dtype=torch.float32).to('cuda')
  166. return hidden_state, cell_state
  167. # 定义前向计算
  168. def forward(self, inputs, states=None):
  169. # inputs: 输入数据,其shape为batch_size x seq_len x input_size
  170. batch_size, seq_len, input_size = inputs.shape  # inputs本身已是张量,直接取shape即可,不要再套一层torch.tensor
  171. # 初始化起始的单元状态和隐状态向量,其shape为batch_size x hidden_size
  172. if states is None:
  173. states = self.init_state(batch_size)
  174. hidden_state, cell_state = states
  175. # 执行LSTM计算,包括:输入门、遗忘门和输出门、候选内部状态、内部状态和隐状态向量
  176. for step in range(seq_len):
  177. # 获取当前时刻的输入数据step_input: 其shape为batch_size x input_size
  178. step_input = inputs[:, step, :]
  179. # 计算输入门, 遗忘门和输出门, 其shape为:batch_size x hidden_size
  180. I_gate = torch.sigmoid(torch.matmul(step_input, self.W_i) + torch.matmul(hidden_state, self.U_i) + self.b_i)
  181. F_gate = torch.sigmoid(torch.matmul(step_input, self.W_f) + torch.matmul(hidden_state, self.U_f) + self.b_f)
  182. O_gate = torch.sigmoid(torch.matmul(step_input, self.W_o) + torch.matmul(hidden_state, self.U_o) + self.b_o)
  183. # 计算候选状态向量, 其shape为:batch_size x hidden_size
  184. C_tilde = torch.tanh(torch.matmul(step_input, self.W_c) + torch.matmul(hidden_state, self.U_c) + self.b_c)
  185. # 计算单元状态向量, 其shape为:batch_size x hidden_size
  186. cell_state = F_gate * cell_state + I_gate * C_tilde
  187. # 计算隐状态向量,其shape为:batch_size x hidden_size
  188. hidden_state = O_gate * torch.tanh(cell_state)
  189. return hidden_state
  190. class AveragePooling(nn.Module):
  191. def __init__(self):
  192. super(AveragePooling, self).__init__()
  193. def forward(self, sequence_output, sequence_length):
  194. # 假设 sequence_length 是一个 PyTorch 张量
  195. sequence_length = sequence_length.unsqueeze(-1).to(torch.float32)
  196. # 根据sequence_length生成mask矩阵,用于对Padding位置的信息进行mask
  197. max_len = sequence_output.shape[1]
  198. mask = torch.arange(max_len, device='cuda') < sequence_length.to('cuda')
  199. mask = mask.to(torch.float32).unsqueeze(-1)
  200. # 对序列中paddling部分进行mask
  201. sequence_output = torch.multiply(sequence_output, mask.to('cuda'))
  202. # 对序列中的向量取均值
  203. batch_mean_hidden = torch.divide(torch.sum(sequence_output, dim=1), sequence_length.to('cuda'))
  204. return batch_mean_hidden
  205. class Model_BiLSTM_FC(nn.Module):
  206. def __init__(self, num_embeddings, input_size, hidden_size, num_classes=2):
  207. super(Model_BiLSTM_FC, self).__init__()
  208. # 词典大小
  209. self.num_embeddings = num_embeddings
  210. # 单词向量的维度
  211. self.input_size = input_size
  212. # LSTM隐藏单元数量
  213. self.hidden_size = hidden_size
  214. # 情感分类类别数量
  215. self.num_classes = num_classes
  216. # 实例化嵌入层
  217. self.embedding_layer = nn.Embedding(num_embeddings, input_size, padding_idx=0)
  218. # 实例化LSTM层
  219. self.lstm_layer = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
  220. # 实例化聚合层
  221. self.average_layer = AveragePooling()
  222. # 实例化输出层
  223. self.output_layer = nn.Linear(hidden_size * 2, num_classes)
  224. def forward(self, inputs):
  225. # 对模型输入拆分为序列数据和mask
  226. input_ids, sequence_length = inputs
  227. # 获取词向量
  228. inputs_emb = self.embedding_layer(input_ids)
  229. packed_input = nn.utils.rnn.pack_padded_sequence(inputs_emb, sequence_length.cpu(), batch_first=True,
  230. enforce_sorted=False)
  231. # 使用lstm处理数据
  232. packed_output, _ = self.lstm_layer(packed_input)
  233. # 解包输出
  234. sequence_output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True)
  235. # 使用聚合层聚合sequence_output
  236. batch_mean_hidden = self.average_layer(sequence_output, sequence_length)
  237. # 输出文本分类logits
  238. logits = self.output_layer(batch_mean_hidden)
  239. return logits
  240. np.random.seed(0)
  241. random.seed(0)
  242. torch.manual_seed(0)  # 固定随机种子,保证结果可复现
  243. # 指定训练轮次
  244. num_epochs = 3
  245. # 指定学习率
  246. learning_rate = 0.001
  247. # 指定embedding的数量为词表长度
  248. num_embeddings = len(word2id_dict)
  249. # embedding向量的维度
  250. input_size = 256
  251. # LSTM网络隐状态向量的维度
  252. hidden_size = 256
  253. # 实例化模型
  254. model = Model_BiLSTM_FC(num_embeddings, input_size, hidden_size).to('cuda')
  255. # 指定优化器
  256. optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999))
  257. # 指定损失函数
  258. loss_fn = nn.CrossEntropyLoss()
  259. # 指定评估指标
  260. metric = Accuracy()
  261. # 实例化Runner
  262. runner = RunnerV3(model, optimizer, loss_fn, metric)
  263. # 模型训练
  264. start_time = time.time()
  265. runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=10, log_steps=10, save_path="./checkpoints/best_self_forward.pdparams")
  266. end_time = time.time()
  267. print("time: ", (end_time-start_time))
  268. model_path = "./checkpoints/best_self_forward.pdparams"
  269. runner.load_model(model_path)
  270. accuracy, _ = runner.evaluate(test_loader)
  271. print(f"Evaluate on test set, Accuracy: {accuracy:.5f}")
