
Getting Started with pyTorch (3): Training GoogLeNet and ResNet


Learn from those better than you,

and become a better version of yourself.

(微卡智享)


This article is about 2,748 characters, an estimated 8 minute read.

Preface

This is the third post in the MNIST training series. It mainly writes out the GoogLeNet and ResNet models and tests them, and adds chart display to the code in train.py.


GoogLeNet


GoogLeNet is a deep neural network model from Google built on the Inception module. An Inception module packs several convolution and pooling operations together into a single network block, and the whole network is then assembled from these blocks, as shown in the figure:

(Figure: structure of the Inception module)

Through this modularity, Inception applies convolution kernels of different sizes to the image in parallel and lets the network choose among them by itself, adjusting the branch weights during training. The branch outputs are then concatenated along the channel dimension, as the short sketch below illustrates.
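A minimal sketch of that concatenation, assuming four branch outputs with the channel counts used in the model further down (pool=24, 1x1=16, 5x5=24, 3x3=24):

import torch

## Four dummy branch outputs in B,C,H,W layout, all with the same spatial size;
## only the channel counts differ (pool=24, 1x1=16, 5x5=24, 3x3=24)
b, h, w = 1, 12, 12
branches = [torch.zeros(b, c, h, w) for c in (24, 16, 24, 24)]

## dim=1 is the channel dimension, so the branches are stacked channel-wise
merged = torch.cat(branches, dim=1)
print(merged.shape)  # torch.Size([1, 88, 12, 12]): 24+16+24+24 = 88 channels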

Based on the Inception module above, we can lay out the network structure directly.

(Figures: the GoogLeNet network structure, laid out stage by stage)

Straight to the source code.

ModelGoogleNet.py

import torch
import torch.nn as nn
import torch.nn.functional as F


class Inception(nn.Module):
    def __init__(self, in_channels):
        super(Inception, self).__init__()
        ## Pooling branch: followed by a 1x1 convolution, since a 1x1
        ## convolution can change the number of channels directly
        self.branch_pool = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_channels, 24, kernel_size=1)
        )
        ## 1x1 branch
        self.branch1x1 = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=1)
        )
        ## 5x5 branch: keeping the image size with a 5x5 kernel needs padding=2
        ## (a 3x3 kernel only needs padding=1)
        self.branch5x5 = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=1),
            nn.Conv2d(16, 24, kernel_size=5, padding=2)
        )
        ## 3x3 branch
        self.branch3x3 = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=1),
            nn.Conv2d(16, 24, kernel_size=3, padding=1),
            nn.Conv2d(24, 24, kernel_size=3, padding=1)
        )

    def forward(self, x):
        ## Pooling branch
        branch_pool = self.branch_pool(x)
        ## 1x1 branch
        branch1x1 = self.branch1x1(x)
        ## 5x5 branch
        branch5x5 = self.branch5x5(x)
        ## 3x3 branch
        branch3x3 = self.branch3x3(x)
        ## Concatenate the branches
        outputs = [branch_pool, branch1x1, branch5x5, branch3x3]
        ## dim=1 concatenates along the channel dimension; the layout is
        ## B,C,W,H (batch size, channels, width, height).
        ## Output channels: branch_pool=24, branch1x1=16, branch5x5=24,
        ## branch3x3=24, so 24+16+24+24 = 88; the Net below therefore
        ## takes 88 input channels.
        return torch.cat(outputs, dim=1)


class GoogleNet(nn.Module):
    def __init__(self):
        super(GoogleNet, self).__init__()
        ## The training images are 1x28x28, so the input has 1 channel;
        ## convert to 10 channels, downsample, then apply Inception
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 10, kernel_size=5),
            nn.MaxPool2d(2),
            nn.ReLU(),
            Inception(10)
        )
        ## The Inception above outputs 88 channels (as computed there),
        ## so this layer takes 88 input channels
        self.conv2 = nn.Sequential(
            nn.Conv2d(88, 20, kernel_size=5),
            nn.MaxPool2d(2),
            nn.ReLU(),
            Inception(20)
        )
        ## Fully connected layer; 1408 comes from the layers above (no need
        ## to compute it by hand: run once and read it off the error message)
        self.fc = nn.Sequential(
            nn.Linear(1408, 10)
        )
        ## Define the loss function
        self.criterion = torch.nn.CrossEntropyLoss()

    def forward(self, x):
        in_size = x.size(0)
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)
        return x

Inside the GoogleNet class, the pattern of 5x5 convolution, pooling, and ReLU activation runs twice, each time followed by an Inception module, and a fully connected layer finishes the network. Before training, the input size of that fully connected layer can be sanity-checked, as the sketch below shows; after that we train the model directly and look at the results.
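A minimal sketch of that check, assuming the ModelGoogleNet.py above is importable:

import torch
from ModelGoogleNet import GoogleNet

## Shape walkthrough: 28x28 -> Conv2d(k=5): 24x24 -> MaxPool2d(2): 12x12
## -> Inception: 12x12 at 88 channels -> Conv2d(k=5): 8x8 -> MaxPool2d(2): 4x4
## -> Inception: 4x4 at 88 channels, so the flattened size is 88*4*4 = 1408
model = GoogleNet()
dummy = torch.zeros(1, 1, 28, 28)  # one fake MNIST image in B,C,H,W
features = model.conv2(model.conv1(dummy))
print(features.shape)              # torch.Size([1, 88, 4, 4])
print(features.view(1, -1).shape)  # torch.Size([1, 1408])

If the Linear size is wrong, the matrix multiplication error message reports the actual flattened size, which is the trick the comment in the model refers to.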

Training results

(Figure: GoogLeNet training log)

As the log above shows, training with GoogleNet reaches a test accuracy of 98%. Because the network structure is fairly complex, training took a corresponding 29 minutes 41 seconds.

(Figure: loss curve on the left, accuracy curve on the right)

train.py now displays charts of the training run: the loss curve on the left and the accuracy curve on the right.

ResNet


ResNet is a residual network. In general, the deeper the network, the richer the features it can learn; but as depth grows, gradients can explode or vanish, optimization quality actually degrades, and accuracy on both the training and test data drops. The residual block works around this: instead of learning a mapping H(x) directly, it learns the residual F(x) and outputs F(x) + x, so the identity shortcut keeps gradients flowing even through deep stacks.

The core structure of ResNet is shown below:

(Figure: structure of the ResNet residual block)

(There are two kinds of ResNet block: a two-layer structure and a three-layer structure.)

Next we implement the first, two-layer kind of ResNet block.

ModelResNet.py

import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    def __init__(self, in_channels):
        super(ResidualBlock, self).__init__()
        self.channels = in_channels
        ## padding=1 keeps the output the same size as the input
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1),
            nn.ReLU()
        )
        ## The second layer is a single convolution, so no nn.Sequential needed
        self.conv2 = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1)

    def forward(self, x):
        ## First layer
        y = self.conv1(x)
        ## Second layer
        y = self.conv2(y)
        ## Add the original x before the activation, which keeps the
        ## gradient from vanishing
        y = F.relu(x + y)
        return y


class ResNet(nn.Module):
    def __init__(self):
        super(ResNet, self).__init__()
        ## First stage
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
            ResidualBlock(16)
        )
        ## Second stage
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
            ResidualBlock(32)
        )
        ## Fully connected layer
        self.fc = nn.Linear(512, 10)
        ## Define the loss function
        self.criterion = torch.nn.CrossEntropyLoss()

    def forward(self, x):
        in_size = x.size(0)
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)
        return x
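The 512 input size of the fully connected layer can be checked the same way as GoogleNet's 1408; a minimal sketch, assuming the ModelResNet.py above is importable:

import torch
from ModelResNet import ResNet

## Shape walkthrough: 28x28 -> Conv2d(k=5): 24x24 -> MaxPool2d(2): 12x12
## -> ResidualBlock: 12x12 at 16 channels -> Conv2d(k=5): 8x8 -> MaxPool2d(2): 4x4
## -> ResidualBlock: 4x4 at 32 channels, so the flattened size is 32*4*4 = 512
model = ResNet()
dummy = torch.zeros(1, 1, 28, 28)
features = model.conv2(model.conv1(dummy))
print(features.view(1, -1).shape)  # torch.Size([1, 512])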

Training results

(Figures: ResNet training log; loss and accuracy curves)

The two figures above show that ResNet needed less than half of GoogleNet's training time, only 10 minutes 5 seconds, and its accuracy passed 99%, so it beats GoogleNet on both speed and quality.

Changes to train.py

(Screenshots: the modified sections of train.py)

The screenshots above cover the parts of train.py that changed; the complete code follows:

import torch
import time
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.optim as optim
import matplotlib.pyplot as plt
from pylab import mpl
from ModelLinearNet import LinearNet
from ModelConv2d import Conv2dNet
from ModelGoogleNet import GoogleNet
from ModelResNet import ResNet

## Number of training epochs
epoch_times = 10
batch_size = 64
## Model to train this run
train_name = 'ResNet'
print("train_name:" + train_name)
## File name for the saved model
savemodel_name = train_name + ".pt"
print("savemodel_name:" + savemodel_name)
## Initial best accuracy; the model is saved whenever the test accuracy exceeds it
toppredicted = 0.0
## Learning rate
learnrate = 0.01
## Momentum: if the previous update and the current gradient point the same
## way, the step is enlarged, which speeds up convergence
momentnum = 0.5

## Arrays for plotting
## Accuracy values
predict_list = []
## Epoch values
epoch_list = []
## Loss values
loss_list = []

transform = transforms.Compose([
    transforms.ToTensor(),
    ## In Normalize, 0.1307 is the mean and 0.3081 the standard deviation,
    ## values precomputed for MNIST and used directly
    transforms.Normalize(mean=(0.1307,), std=(0.3081,))
])

## Training set location; downloaded if not present
train_dataset = datasets.MNIST(
    root='../datasets/mnist',
    train=True,
    download=True,
    transform=transform
)
## Load the training set
train_dataloader = DataLoader(
    dataset=train_dataset,
    shuffle=True,
    batch_size=batch_size
)
## Test set location; downloaded if not present
test_dataset = datasets.MNIST(
    root='../datasets/mnist',
    train=False,
    download=True,
    transform=transform
)
## Load the test set
test_dataloader = DataLoader(
    dataset=test_dataset,
    shuffle=True,
    batch_size=batch_size
)

## Select the model to train; this runs on Python 3.9, so the
## match-case syntax (added in Python 3.10) is not available
def switch(train_name):
    if train_name == 'LinearNet':
        return LinearNet()
    elif train_name == 'Conv2dNet':
        return Conv2dNet()
    elif train_name == 'GoogleNet':
        return GoogleNet()
    elif train_name == 'ResNet':
        return ResNet()

## Wrapper around the selected training model
class Net(torch.nn.Module):
    def __init__(self, train_name):
        super(Net, self).__init__()
        self.model = switch(train_name=train_name)
        self.criterion = self.model.criterion

    def forward(self, x):
        x = self.model(x)
        return x

model = Net(train_name)
## Choose CPU or GPU training
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

## Optimizer
optimizer = optim.SGD(model.parameters(), lr=learnrate, momentum=momentnum)
# optimizer = optim.NAdam(model.parameters(), lr=learnrate)

## Training function
def train(epoch):
    running_loss = 0.0
    current_train = 0.0
    model.train()
    for batch_idx, data in enumerate(train_dataloader, 0):
        inputs, target = data
        ## Move to CPU or GPU
        inputs, target = inputs.to(device), target.to(device)
        optimizer.zero_grad()
        # Forward pass, backpropagation, update
        outputs = model(inputs)
        loss = model.criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        ## Print the training progress every 300 batches
        if batch_idx % 300 == 299:
            current_train = current_train + 0.3
            current_epoch = epoch + 1 + current_train
            epoch_list.append(current_epoch)
            current_loss = running_loss / 300
            loss_list.append(current_loss)
            print('[%d, %5d] loss: %.3f' % (current_epoch, batch_idx + 1, current_loss))
            running_loss = 0.0

def test():
    correct = 0
    total = 0
    model.eval()
    ## No gradients are computed inside this with block
    with torch.no_grad():
        for data in test_dataloader:
            inputs, labels = data
            ## Move to CPU or GPU
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = model(inputs)
            ## torch.max returns two tensors, the max values and their indices;
            ## dim=1 takes the max over the class dimension, so predicted holds
            ## the class index 0-9
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    currentpredicted = (100 * correct / total)
    ## Declare toppredicted global so the function can modify the variable
    ## defined outside it; otherwise this raises an error
    global toppredicted
    ## Save the model when the accuracy beats the previous best
    if currentpredicted > toppredicted:
        toppredicted = currentpredicted
        torch.save(model.state_dict(), savemodel_name)
        print(savemodel_name + " saved, currentpredicted:%d %%" % currentpredicted)

    predict_list.append(currentpredicted)
    print('Accuracy on test set: %d %%' % currentpredicted)

## Start training
timestart = time.time()
for epoch in range(epoch_times):
    train(epoch)
    test()
timeend = time.time() - timestart
print("use time: {:.0f}m {:.0f}s".format(timeend // 60, timeend % 60))

## Set a font that can display the Chinese chart labels
mpl.rcParams["font.sans-serif"] = ["SimHei"]
## Display minus signs correctly
mpl.rcParams["axes.unicode_minus"] = False
## Create the figure
fig, (axloss, axpredict) = plt.subplots(nrows=1, ncols=2, figsize=(8, 6))
# Loss subplot
axloss.plot(epoch_list, loss_list, label='loss', color='r')
## Set the ticks
axloss.set_xticks(range(epoch_times)[::1])
axloss.set_xticklabels(range(epoch_times)[::1])
axloss.set_xlabel('训练轮数')
axloss.set_ylabel('数值')
axloss.set_title(train_name + ' 损失值')
# Add the legend
axloss.legend(loc=0)
# Accuracy subplot
axpredict.plot(range(epoch_times), predict_list, label='predict', color='g')
## Set the ticks
axpredict.set_xticks(range(epoch_times)[::1])
axpredict.set_xticklabels(range(epoch_times)[::1])
# axpredict.set_yticks(range(100)[::5])
# axpredict.set_yticklabels(range(100)[::5])
axpredict.set_xlabel('训练轮数')
axpredict.set_ylabel('预测值')
axpredict.set_title(train_name + ' 预测值')
# Add the legend
axpredict.legend(loc=0)
# Show the figure
plt.show()
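Since the script only saves the best state_dict, here is a minimal sketch of loading it back for inference, assuming the Net wrapper above and the ResNet.pt file produced by the run:

import torch

## Rebuild the same wrapper, then load the saved weights into it
model = Net('ResNet')
model.load_state_dict(torch.load('ResNet.pt', map_location='cpu'))
model.eval()  # switch the model to inference mode

with torch.no_grad():
    dummy = torch.zeros(1, 1, 28, 28)  # a normalized MNIST-sized input
    _, predicted = torch.max(model(dummy), dim=1)
    print(predicted)                   # the predicted digit class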


Previous posts

Getting Started with pyTorch (2): Common Network Layer Functions and Training a Convolutional Neural Network

Getting Started with pyTorch (1): Training a Fully Connected Network for MNIST Handwritten Digit Recognition

Super Simple pyTorch Training -> ONNX Model -> C++ OpenCV DNN Inference (source code included)