
Dimension issues in RNNs (PyTorch): text classification with a GRU (following 刘二大人)


While learning RNNs recently I kept getting confused about the shapes of the various inputs and outputs, so I worked through the dimension questions with the help of 刘二大人's PyTorch course and the following post: Pytorch深度学习实践(b站刘二大人)P13讲 (RNN循环神经网络高级篇)_努力学习的朱朱的博客-CSDN博客.

1. What does nn.Embedding do in PyTorch?

Let's start with the following snippet:

import torch

embedding = torch.nn.Embedding(10, 3)
print(embedding.weight)     # the lookup table: one row (word vector) per vocabulary index
input = torch.LongTensor([[0, 2, 4, 5], [4, 3, 2, 0]])
output = embedding(input)   # rows of the table gathered by index
print(output)
print(output.shape)

The output is:

Parameter containing:
tensor([[ 0.1540, -0.8776, -0.9737],
        [-2.0864, -1.1387, -1.9999],
        [ 0.3297,  1.2760,  0.4246],
        [-0.4424,  1.0758, -1.3849],
        [ 0.6420, -2.5247, -1.1060],
        [ 1.0529,  1.3949, -1.0098],
        [ 1.1634, -1.1316, -0.1378],
        [ 1.3910,  0.9718,  0.1931],
        [-1.9672, -0.5770,  1.0776],
        [-0.4043, -0.9368,  3.2478]], requires_grad=True)
tensor([[[ 0.1540, -0.8776, -0.9737],
         [ 0.3297,  1.2760,  0.4246],
         [ 0.6420, -2.5247, -1.1060],
         [ 1.0529,  1.3949, -1.0098]],

        [[ 0.6420, -2.5247, -1.1060],
         [-0.4424,  1.0758, -1.3849],
         [ 0.3297,  1.2760,  0.4246],
         [ 0.1540, -0.8776, -0.9737]]], grad_fn=<EmbeddingBackward0>)
torch.Size([2, 4, 3])

As you can see, nn.Embedding simply creates a randomly initialized weight matrix whose shape is given by the parameters we set (vocabulary size vocab_size × embedding dimension embedding_dim), and calling it looks up rows of that matrix by the indices in input: for example, [0, 2, 4, 5] in input selects rows 0, 2, 4 and 5 of the matrix. Here the input shape is (batch_size, seq_len) and the output shape is (batch_size, seq_len, embedding_size). Note that in the actual model later on, the (batch_size, seq_len) input is transposed before being fed in; the shapes listed here are for the untransposed input and are only meant to show what the embedding layer does.
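The transpose just swaps the first two axes; a tiny sketch reusing the toy sizes above shows the effect:

import torch

embedding = torch.nn.Embedding(10, 3)
x = torch.LongTensor([[0, 2, 4, 5], [4, 3, 2, 0]])   # (batch_size=2, seq_len=4)
print(embedding(x).shape)       # torch.Size([2, 4, 3]) -> (batch, seq_len, embedding_size)
print(embedding(x.t()).shape)   # torch.Size([4, 2, 3]) -> (seq_len, batch, embedding_size)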

2. batch_size and seq_len

TensorFlow has a dedicated seq_len argument for handling sentence length, but in PyTorch the parameters are:

class torch.nn.LSTM(*args, **kwargs)
    input_size:    feature dimension of the input x
    hidden_size:   feature dimension of the hidden state
    num_layers:    number of stacked LSTM layers, default 1
    bias:          if False, the biases b_ih and b_hh are not used; default True
    batch_first:   if True, input and output tensors are laid out as (batch, seq, feature)
    dropout:       dropout applied to the output of every layer except the last, default 0
    bidirectional: if True, the LSTM is bidirectional; default False

Clearly there is no seq_len argument, so how do we deal with variable-length sequences in PyTorch? We have to pad the data to a common length ourselves before it goes into the LSTM; as far as PyTorch's LSTM is concerned, it only matters that every sample within a batch has the same seq_len. There are generally two ways to do this:

1. Build the DataLoader batches yourself: pad every sentence in a batch to the length of the longest sentence in that batch (this is the approach used later in this post); a minimal collate_fn sketch is given after the code for option 2 below.

2. Use torch.utils.data.TensorDataset together with torch.utils.data.DataLoader; with this approach every sentence has to be padded to the same fixed length up front, for example like this:

def truncate_pad(line, num_steps, padding_token):
    """Truncate or pad a token sequence to num_steps."""
    if len(line) > num_steps:
        return line[:num_steps]                              # truncate
    return line + [padding_token] * (num_steps - len(line))  # pad

num_steps = 100   # fixed padded length
train_features = torch.tensor([truncate_pad(
    vocab[line], num_steps, vocab['<pad>']) for line in tokens])
train_dataset = Data.TensorDataset(train_features, torch.tensor(y_train))      # wrap as a Dataset
trainloader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)   # wrap as a DataLoader
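For option 1, here is a minimal sketch of a custom collate function. The name collate_batch, the padding index 0, and the (token_id_list, label) sample format are assumptions for illustration, not part of the original code:

import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def collate_batch(batch):
    """Pad every sequence in this batch to the length of the longest one."""
    seqs, labels = zip(*batch)                      # batch: list of (token_id_list, label)
    seqs = [torch.LongTensor(s) for s in seqs]
    lengths = torch.LongTensor([len(s) for s in seqs])
    padded = pad_sequence(seqs, batch_first=True, padding_value=0)  # (batch, max_len_in_batch)
    return padded, lengths, torch.LongTensor(labels)

# trainloader = DataLoader(train_dataset, batch_size=BATCH_SIZE,
#                          shuffle=True, collate_fn=collate_batch)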

3. Dimensions inside the RNN model

Now let's use the example from 刘二大人's PyTorch course (《PyTorch深度学习实践》完结合集_哔哩哔哩_bilibili) to trace the shapes of the data as they flow through the network.

import torch
import random
import time
import csv
import gzip
from torch.utils.data import DataLoader
import datetime
import matplotlib.pyplot as plt
import numpy as np

# Parameters
HIDDEN_SIZE = 100
BATCH_SIZE = 32
N_LAYER = 2
N_EPOCHS = 20
N_CHARS = 128
USE_GPU = False

class NameDataset():                        # dataset wrapper
    def __init__(self, is_train_set=True):
        filename = 'names_train.csv.gz' if is_train_set else 'names_test.csv.gz'
        with gzip.open(filename, 'rt') as f:
            reader = csv.reader(f)
            rows = list(reader)
        random.shuffle(rows)
        rows = rows[:256]                   # keep only 256 samples for the demo
        self.names = [row[0] for row in rows]                   # the names
        self.len = len(self.names)                              # number of samples
        self.countries = [row[1] for row in rows]               # the country labels
        self.country_list = list(sorted(set(self.countries)))   # sorted set of the 18 country names
        # set() removes duplicates, sorted() orders them;
        # in fact list(sorted(set(self.countries))) == sorted(set(self.countries))
        self.country_dict = self.getCountryDict()               # country name -> index
        self.country_num = len(self.country_list)               # number of countries (18)

    def __getitem__(self, index):
        return self.names[index], self.country_dict[self.countries[index]]

    def __len__(self):
        return self.len

    def getCountryDict(self):
        country_dict = dict()                                   # empty dictionary
        for idx, country_name in enumerate(self.country_list, 0):
            country_dict[country_name] = idx                    # map each country name to its index
        return country_dict

    def idx2country(self, index):                               # index -> country name
        return self.country_list[index]

    def getCountrysNum(self):                                   # number of countries
        return self.country_num

trainset = NameDataset(is_train_set=True)
trainloader = DataLoader(trainset, batch_size=BATCH_SIZE, shuffle=True)
testset = NameDataset(is_train_set=False)
testloader = DataLoader(testset, batch_size=BATCH_SIZE, shuffle=False)

for item in trainloader:
    print(item)

N_COUNTRY = trainset.getCountrysNum()

To keep the demo small I only kept 256 of the samples; the for-loop above prints what each batch looks like after the data has been wrapped in a DataLoader.

Each batch consists of 32 names and their corresponding country labels. Next we need to turn them into tensors.

def name2list(name):
    """Return the name as a list of ASCII codes, plus its length."""
    arr = [ord(c) for c in name]
    return arr, len(arr)

def make_tensors(names, countries):
    sequences_and_lengths = [name2list(name) for name in names]
    name_sequences = [sl[0] for sl in sequences_and_lengths]
    seq_lengths = torch.LongTensor([sl[1] for sl in sequences_and_lengths])
    countries = countries.long()
    # pad every name with zeros up to the length of the longest name in the batch
    seq_tensor = torch.zeros(len(name_sequences), seq_lengths.max()).long()
    for idx, (seq, seq_len) in enumerate(zip(name_sequences, seq_lengths), 0):
        seq_tensor[idx, :seq_len] = torch.LongTensor(seq)
    # sort the batch by sequence length, descending
    seq_lengths, perm_idx = seq_lengths.sort(dim=0, descending=True)
    seq_tensor = seq_tensor[perm_idx]
    countries = countries[perm_idx]
    return seq_tensor, seq_lengths, countries

What this code does is turn each batch into a tensor in which every name has the same length (padded with zeros to the longest name in that batch), so each batch has shape (batch_size, seq_len). It also sorts the batch by length in descending order, which is the order pack_padded_sequence expects if you later decide to pack the sequences.

Let's look at the processed data:

for i, (names, countries) in enumerate(trainloader, 1):
    print('names', names, 'Countries', countries)
    inputs, seq_lengths, target = make_tensors(names, countries)
    print('inputs, seq_lengths, target', inputs, seq_lengths, target)
    print('inputs.shape', inputs.shape)
names ('Balawin', 'Likhovtsev', 'Cullen', 'Abadi', 'Uzky', 'Moshnyaga', 'Abrosimov', 'Fencl', 'Antar', 'Pastore', 'Matjeka', 'Larsen', 'Mikhalkov', 'Chavez', 'Agoshkov', 'Hasek', 'Fedotko', 'Koury', 'Winter', 'Hautem', 'Dioli', 'Chershintsev', 'Herbert', 'Anami', 'Makferson', 'Christakos', 'Molnovetsky', 'Tsagareli', 'Hublaryan', 'Matskovsky', 'Radford', 'Antyushin') Countries tensor([13, 13,  4,  0, 13, 13, 13,  2,  0,  9,  2,  4, 13, 14, 13,  6, 13,  0,
         4,  3,  9, 13,  4, 10, 13,  7, 13, 13, 13, 13,  4, 13])
inputs, seq_lengths, target tensor([[ 67, 104, 101, 114, 115, 104, 105, 110, 116, 115, 101, 118],
        [ 77, 111, 108, 110, 111, 118, 101, 116, 115, 107, 121,   0],
        [ 76, 105, 107, 104, 111, 118, 116, 115, 101, 118,   0,   0],
        [ 67, 104, 114, 105, 115, 116,  97, 107, 111, 115,   0,   0],
        [ 77,  97, 116, 115, 107, 111, 118, 115, 107, 121,   0,   0],
        [ 77, 111, 115, 104, 110, 121,  97, 103,  97,   0,   0,   0],
        [ 65,  98, 114, 111, 115, 105, 109, 111, 118,   0,   0,   0],
        [ 77, 105, 107, 104,  97, 108, 107, 111, 118,   0,   0,   0],
        [ 77,  97, 107, 102, 101, 114, 115, 111, 110,   0,   0,   0],
        [ 84, 115,  97, 103,  97, 114, 101, 108, 105,   0,   0,   0],
        [ 72, 117,  98, 108,  97, 114, 121,  97, 110,   0,   0,   0],
        [ 65, 110, 116, 121, 117, 115, 104, 105, 110,   0,   0,   0],
        [ 65, 103, 111, 115, 104, 107, 111, 118,   0,   0,   0,   0],
        [ 66,  97, 108,  97, 119, 105, 110,   0,   0,   0,   0,   0],
        [ 80,  97, 115, 116, 111, 114, 101,   0,   0,   0,   0,   0],
        [ 77,  97, 116, 106, 101, 107,  97,   0,   0,   0,   0,   0],
        [ 70, 101, 100, 111, 116, 107, 111,   0,   0,   0,   0,   0],
        [ 72, 101, 114,  98, 101, 114, 116,   0,   0,   0,   0,   0],
        [ 82,  97, 100, 102, 111, 114, 100,   0,   0,   0,   0,   0],
        [ 67, 117, 108, 108, 101, 110,   0,   0,   0,   0,   0,   0],
        [ 76,  97, 114, 115, 101, 110,   0,   0,   0,   0,   0,   0],
        [ 67, 104,  97, 118, 101, 122,   0,   0,   0,   0,   0,   0],
        [ 87, 105, 110, 116, 101, 114,   0,   0,   0,   0,   0,   0],
        [ 72,  97, 117, 116, 101, 109,   0,   0,   0,   0,   0,   0],
        [ 65,  98,  97, 100, 105,   0,   0,   0,   0,   0,   0,   0],
        [ 70, 101, 110,  99, 108,   0,   0,   0,   0,   0,   0,   0],
        [ 65, 110, 116,  97, 114,   0,   0,   0,   0,   0,   0,   0],
        [ 72,  97, 115, 101, 107,   0,   0,   0,   0,   0,   0,   0],
        [ 75, 111, 117, 114, 121,   0,   0,   0,   0,   0,   0,   0],
        [ 68, 105, 111, 108, 105,   0,   0,   0,   0,   0,   0,   0],
        [ 65, 110,  97, 109, 105,   0,   0,   0,   0,   0,   0,   0],
        [ 85, 122, 107, 121,   0,   0,   0,   0,   0,   0,   0,   0]]) tensor([12, 11, 10, 10, 10,  9,  9,  9,  9,  9,  9,  9,  8,  7,  7,  7,  7,  7,
         7,  6,  6,  6,  6,  6,  5,  5,  5,  5,  5,  5,  5,  4]) tensor([13, 13, 13,  7, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,  9,  2, 13,  4,
         4,  4,  4, 14,  4,  3,  0,  2,  0,  6,  0,  9, 10, 13])
inputs.shape torch.Size([32, 12])

That is what one batch of data looks like. Next we build the RNN model; here we use a GRU.

class RNNClassifier(torch.nn.Module):
    def __init__(self, vocab_size, embedding_size, output_size, n_layers=1, bidirectional=True):
        super(RNNClassifier, self).__init__()
        self.hidden_size = embedding_size
        self.n_layers = n_layers
        self.n_directions = 2 if bidirectional else 1
        # embedding: input (seq_len, batch) -> output (seq_len, batch, embedding_size)
        self.embedding = torch.nn.Embedding(vocab_size, embedding_size)
        self.gru = torch.nn.GRU(embedding_size, self.hidden_size, n_layers, bidirectional=bidirectional)
        self.fc = torch.nn.Linear(self.hidden_size * self.n_directions, output_size)

    def forward(self, input, seq_lengths):
        input = input.t()                       # (batch, seq_len) -> (seq_len, batch)
        print('input.shape', input.shape)
        batch_size = input.size(1)
        hidden = self._init_hidden(batch_size)
        print('hidden shape', hidden.shape)
        embedding = self.embedding(input)
        print('embedding shape', embedding.shape)
        seq_lengths = seq_lengths.cpu()         # kept for the packed variant; unused in this simplified forward
        output, hidden = self.gru(embedding, hidden)
        print("output, hidden", output.shape, hidden.shape)
        if self.n_directions == 2:
            # for a bidirectional GRU, concatenate the final hidden states of both directions
            hidden_cat = torch.cat([hidden[-1], hidden[-2]], dim=1)
        else:
            hidden_cat = hidden[-1]
        fc_output = self.fc(hidden_cat)
        return fc_output

    def _init_hidden(self, batch_size):
        hidden = torch.zeros(self.n_layers * self.n_directions, batch_size, self.hidden_size)
        # create_tensor (a helper from the original tutorial, not shown here) moves the tensor to the GPU when USE_GPU is True
        return create_tensor(hidden)
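Note that seq_lengths is passed into forward but never actually used in this simplified version. In the full version of this example, pack_padded_sequence is used so the GRU can skip the padded zeros (which is also why make_tensors sorts the batch by length); a sketch of that variant, under those assumptions, looks like this:

from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# inside forward(), instead of  output, hidden = self.gru(embedding, hidden):
gru_input = pack_padded_sequence(embedding, seq_lengths)   # requires the batch sorted by length, descending
output, hidden = self.gru(gru_input, hidden)               # hidden is still (n_layers * n_directions, batch, hidden_size)
output, _ = pad_packed_sequence(output)                     # back to (seq_len, batch, n_directions * hidden_size) if needed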

Then we train the model:

def trainModel():
    total_loss = 0
    for i, (names, countries) in enumerate(trainloader, 1):
        optimizer.zero_grad()
        inputs, seq_lengths, target = make_tensors(names, countries)
        output = classifier(inputs, seq_lengths)   # run the batch through the classifier
        loss = criterion(output, target)           # compute the loss
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
        if i == len(trainset) // BATCH_SIZE:       # print once per epoch, on the last full batch
            print(f'loss={total_loss / (i * len(inputs))}')
    return total_loss

print("Train for %d epochs..." % N_EPOCHS)
classifier = RNNClassifier(N_CHARS, HIDDEN_SIZE, N_COUNTRY, N_LAYER)
if USE_GPU:
    device = torch.device('cuda:0')
    classifier.to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(classifier.parameters(), lr=0.001)

for epoch in range(1, N_EPOCHS + 1):
    print('%d / %d:' % (epoch, N_EPOCHS))
    trainModel()

Let's look at the shapes (only part of the output is shown):

Train for 20 epochs...
1 / 20:
input.shape torch.Size([10, 32])
hidden shape torch.Size([4, 32, 100])
embedding shape torch.Size([10, 32, 100])
output, hidden torch.Size([10, 32, 200]) torch.Size([4, 32, 100])
input.shape torch.Size([14, 32])
hidden shape torch.Size([4, 32, 100])
embedding shape torch.Size([14, 32, 100])
output, hidden torch.Size([14, 32, 200]) torch.Size([4, 32, 100])
input.shape torch.Size([13, 32])
hidden shape torch.Size([4, 32, 100])
embedding shape torch.Size([13, 32, 100])
output, hidden torch.Size([13, 32, 200]) torch.Size([4, 32, 100])

input is one vectorized batch. Because it is transposed inside forward, its shape changes from (batch_size, seq_len) to (seq_len, batch_size), where seq_len is the length of the longest word (sentence) in that batch.

hidden is the GRU's hidden state (we initialize h0 at the start); its shape is (n_layers * n_directions, batch_size, hidden_size).

embedding is the result of passing input through the nn.Embedding layer; its shape is (seq_len, batch_size, embedding_size).

After the GRU there are two outputs: output with shape (seq_len, batch, n_directions * hidden_size), and hn with shape (n_layers * n_directions, batch, hidden_size).

Here, output contains the outputs at every intermediate time step, while hn is the hidden state at the final time step.

You can also refer directly to the official documentation.
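As a quick sanity check, here is a minimal standalone snippet (sizes chosen to match the printout above) that reproduces these shapes:

import torch

seq_len, batch_size, embedding_size, hidden_size = 10, 32, 100, 100
n_layers, n_directions = 2, 2

gru = torch.nn.GRU(embedding_size, hidden_size, n_layers, bidirectional=(n_directions == 2))
x  = torch.randn(seq_len, batch_size, embedding_size)              # embedded input
h0 = torch.zeros(n_layers * n_directions, batch_size, hidden_size)

output, hn = gru(x, h0)
print(output.shape)   # torch.Size([10, 32, 200]) -> (seq_len, batch, n_directions * hidden_size)
print(hn.shape)       # torch.Size([4, 32, 100])  -> (n_layers * n_directions, batch, hidden_size)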

This post mainly walked through the dimension issues of RNN models; I hope it helps!
