
Deep Learning 07: A Summary of Classic CNN Models (LeNet-5, AlexNet, VGG, GoogLeNet, ResNet)


Contents

Classic CNN Models

LeNet-5

AlexNet

VGG

GoogLeNet (Inception)

ResNet

How to Choose a Network


Classic CNN Models

The following introduces LeNet-5, AlexNet, VGG, GoogLeNet, and ResNet, which are typically used for image data. Can convolutional networks also be applied to natural-language classification? Yes: TextCNN is a network structure that uses CNNs for natural language processing, and interested readers can look it up.

LeNet-5

1998, Yann LeCun's LeNet-5. Official site

The pioneering work of convolutional neural networks: small but complete. Convolutional layers, pooling layers, and fully connected layers are all basic components of modern CNNs.

  • Convolutions extract spatial features;
  • Subsampling is obtained by spatial averaging;
  • tanh or sigmoid provides the nonlinearity;
  • A multi-layer perceptron (MLP) serves as the final classifier;
  • Sparse connection matrices between layers avoid large computational cost.

Input: 32×32 images, larger than the biggest characters in the MNIST database (28×28). The intent is to let potentially salient features, such as stroke endpoints and corners, appear at the center of the receptive field of the highest-level feature detectors.

Output: 10 classes, the probabilities of the digits 0-9.

  1. C1 is a convolutional layer with 6 kernels (extracting 6 kinds of local features) of size 5×5; for a 32×32 grayscale image it outputs 6 feature maps of size 28×28.
  2. S2 is a pooling layer: downsampling over 2×2 regions with stride 2 turns the 6 28×28 feature maps into 6 14×14 feature maps, reducing the number of trainable parameters and the degree of overfitting.
  3. C3 is the second convolutional layer: 16 kernels of size 5×5 with no padding turn the 6 14×14 feature maps into 16 10×10 feature maps.
  4. S4 is another pooling layer: 2×2 regions with stride 2 turn the 16 10×10 feature maps into 16 5×5 feature maps.
  5. C5 is the last convolutional layer: kernel size 5×5, 120 kernel types, 120 neurons.
  6. Finally, fully connected layers classify C5's 120 features: the hidden layer has 84 neurons, and the output layer gives the probabilities of the digits 0-9.
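The feature-map sizes in the steps above follow from the standard convolution/pooling size formula, output = (input − kernel + 2·padding) / stride + 1. A quick pure-Python trace of LeNet-5's shapes (the helper name is ours, not from the paper):

```python
def out_size(n, k, s=1, p=0):
    """Side length after a conv/pool layer: (n - k + 2p) // s + 1."""
    return (n - k + 2 * p) // s + 1

n = 32                   # input image: 32x32
n = out_size(n, 5)       # C1: 5x5 conv        -> 28
n = out_size(n, 2, s=2)  # S2: 2x2 pool, s=2   -> 14
n = out_size(n, 5)       # C3: 5x5 conv        -> 10
n = out_size(n, 2, s=2)  # S4: 2x2 pool, s=2   -> 5
n = out_size(n, 5)       # C5: 5x5 conv        -> 1
print(n)  # 1: C5 collapses each 5x5 map to a single value
```

This is why C5 can be implemented either as a 5×5 convolution (as in the paper) or as a linear layer on the flattened 16×5×5 features, as the tutorial code below does.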

The following code comes from the official PyTorch tutorial.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNet5(nn.Module):
    def __init__(self):
        super(LeNet5, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # the paper uses a conv here; the official tutorial uses a linear layer
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = LeNet5()
print(net)

Output:

LeNet5(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)

AlexNet

2012, Alex Krizhevsky. AlexNet can be seen as a deeper and wider version of LeNet, able to learn more complex objects. Paper

  • Rectified linear units (ReLU) provide the nonlinearity;
  • Dropout selectively ignores individual neurons during training, mitigating overfitting;
  • Overlapping max pooling avoids the averaging effect of average pooling;
  • Training on an NVIDIA GTX 580 GPU reduced training time, roughly 10x faster than a CPU, making larger datasets and images feasible.

Although AlexNet has only 8 layers, it has over 60M parameters. AlexNet also has a special layer, LRN (local response normalization), which smooths the current layer's output; we will not cover it in detail here. Counting each stage that contains a major convolution computation as one layer, AlexNet's 8 stages are:
  • conv - relu - pooling - LRN : note that the input is 227×227, not the 224 in the paper. You can check this yourself: 227 divides evenly through the conv1 computation, while 224 does not. You could force 224 to work with automatic padding, but zero-padding the input itself adds nothing; this is why the output-size formula mentioned earlier matters.
  • conv - relu - pool - LRN : group=2, which forcibly splits the previous feature maps into two groups, so the convolution is done in two parts
  • conv - relu
  • conv - relu
  • conv - relu - pool
  • fc - relu - dropout : in AlexNet, dropout sets each hidden neuron's output to 0 with probability 1/2 during training, discarding half the nodes' outputs; those nodes are also not updated during backpropagation, which prevents overfitting.
  • fc - relu - dropout
  • fc - softmax

PyTorch's vision package includes an official implementation of AlexNet; we use it directly to inspect the network.

import torchvision
model = torchvision.models.alexnet(pretrained=False)  # do not download pretrained weights
print(model)

Output:

AlexNet(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
    (1): ReLU(inplace=True)
    (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (4): ReLU(inplace=True)
    (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (7): ReLU(inplace=True)
    (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (9): ReLU(inplace=True)
    (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(6, 6))
  (classifier): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=9216, out_features=4096, bias=True)
    (2): ReLU(inplace=True)
    (3): Dropout(p=0.5, inplace=False)
    (4): Linear(in_features=4096, out_features=4096, bias=True)
    (5): ReLU(inplace=True)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
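As a sanity check on the "over 60M parameters" figure, we can total the weights and biases implied by the layer shapes printed above (conv: k·k·in·out + out biases; linear: in·out + out biases). This is a rough tally based on the torchvision shapes, not an official count:

```python
def conv_params(k, cin, cout):
    # k x k kernel, cin input channels, cout output channels, plus biases
    return k * k * cin * cout + cout

def fc_params(cin, cout):
    return cin * cout + cout

total = (conv_params(11, 3, 64)
         + conv_params(5, 64, 192)
         + conv_params(3, 192, 384)
         + conv_params(3, 384, 256)
         + conv_params(3, 256, 256)
         + fc_params(9216, 4096)
         + fc_params(4096, 4096)
         + fc_params(4096, 1000))
print(total)  # 61100840 -> roughly 61M parameters
```

Note that almost all of the parameters live in the first fully connected layer (9216×4096), not in the convolutions.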

VGG

2015, Oxford's VGG. Paper

  • Each convolutional layer uses smaller 3×3 filters, combined into sequences of convolutions
  • Multiple 3×3 convolutions in sequence can emulate the effect of a larger receptive field
  • At each stage, the spatial resolution halves and the number of kernels doubles

VGG has many versions and counts as a stable, classic model. Its characteristic weakness is the huge amount of computation from the consecutive convolutions. We take VGG16 as an example. VGG uses small kernels throughout; combining the authors' views with my own, the advantages of small kernels over large ones are:

According to the authors: for input = 8, three stacked 3×3 convolutions give output = 2, the same as one 7×7 convolution; for input = 8, two stacked 3×3 convolutions give output = 4, the same as one 5×5 convolution.
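These equivalences are easy to verify numerically: each valid (no-padding, stride-1) k×k convolution shrinks the side length by k − 1, so a stack shrinks additively. A minimal check (the helper name is ours):

```python
def stack_out(n, kernels):
    """Side length after a stack of valid, stride-1 convolutions."""
    for k in kernels:
        n = n - (k - 1)
    return n

# three 3x3 convs shrink 8 -> 2, exactly like one 7x7 conv
assert stack_out(8, [3, 3, 3]) == stack_out(8, [7]) == 2
# two 3x3 convs shrink 8 -> 4, exactly like one 5x5 conv
assert stack_out(8, [3, 3]) == stack_out(8, [5]) == 4
```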

Fewer convolutional parameters: compared with large 5×5, 7×7 and 11×11 kernels, 3×3 clearly reduces the parameter count.
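The saving is concrete: for C input and C output channels, three stacked 3×3 convolutions cost 3·(3²·C²) = 27C² weights, while the one 7×7 convolution they replace costs 49C². A quick sketch with C = 512, VGG's widest stage (biases ignored for simplicity):

```python
C = 512
stacked = 3 * (3 * 3 * C * C)  # three 3x3 conv layers: 27 * C^2
single = 7 * 7 * C * C         # one 7x7 conv layer:    49 * C^2
print(stacked, single)         # 7077888 vs 12845056 -> ~45% fewer weights
```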

After each convolution-plus-pooling stage, the image resolution halves while the number of feature channels doubles, a very regular scheme: the resolution goes from the 224 input to 112 → 56 → 28 → 14 → 7, and the channels go from RGB's 3 → 64 → 128 → 256 → 512.

This set a standard for later networks. Again we use the official PyTorch implementation to inspect it.

import torchvision
model = torchvision.models.vgg16(pretrained=False)  # do not download pretrained weights
print(model)

Output:

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

GoogLeNet (Inception)

2014, Google, Christian Szegedy. Paper

  • 1×1 convolution blocks (from NiN) reduce the number of features; this is often called a "bottleneck" and reduces the computational burden of deep networks.
  • Before each pooling layer, the number of feature maps is increased, widening each layer to increase the combinability of features.
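To see why the 1×1 "bottleneck" cuts computation, here is a multiply count on illustrative numbers (the 28×28×192 input, 32 output channels, and 16-channel bottleneck are our assumptions, not figures from the paper): a direct 5×5 convolution versus a 1×1 reduction followed by the same 5×5 convolution.

```python
H = W = 28          # illustrative spatial size
C_in, C_out = 192, 32
C_mid = 16          # hypothetical bottleneck width

# direct 5x5 convolution: one multiply per kernel tap per output element
direct = H * W * C_out * (5 * 5 * C_in)

# bottleneck: 1x1 reduction to C_mid, then 5x5 on the reduced volume
bottleneck = H * W * C_mid * C_in + H * W * C_out * (5 * 5 * C_mid)

print(direct, bottleneck)  # 120422400 vs 12443648 -> roughly 10x fewer multiplies
```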

GoogLeNet's defining feature is its stack of Inception modules, so it is sometimes called Inception Net. Although GoogLeNet has many more layers than VGG, the Inception design makes it much faster to compute.

The Inception module diagram can look intimidating, but the principle is simple.

The main idea of the Inception architecture is to work out how an optimal local sparse structure in a convolutional vision network can be approximated and covered by readily available dense components. The task is to find that optimal local construction and repeat it. An earlier paper proposed a layer-by-layer construction: analyze the correlation statistics of the last layer and cluster highly correlated units together. These clusters form the units of the next layer and are connected to the units of the previous layer. Assuming each unit of an earlier layer corresponds to some region of the input image, the units are grouped into filter banks. In the lower layers close to the input, correlated units concentrate in local regions, so we end up with many clusters within a single region, which can be covered by 1×1 convolutions in the next layer.

That sounds stiff, but the explanation is simple: in each module we apply several different feature extractors, for example 3×3 convolution, 5×5 convolution, 1×1 convolution, and pooling, compute them all, and then join the results with a filter concatenation, letting the network find which contributes most. The network contains many such modules, so instead of a human judging which feature extractor works best, the network figures it out itself (a bit like AutoML). PyTorch implements modules InceptionA through InceptionE, plus an InceptionAux module.

# inception_v3 requires scipy; run `pip install scipy` if it is not installed
import torchvision
model = torchvision.models.inception_v3(pretrained=False)  # do not download pretrained weights
print(model)

Output:

Inception3(
  (Conv2d_1a_3x3): BasicConv2d(
    (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False)
    (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
  )
  (Conv2d_2a_3x3): BasicConv2d(
    (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
  )
  (Conv2d_2b_3x3): BasicConv2d(
    (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
  )
  (maxpool1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  (Conv2d_3b_1x1): BasicConv2d(
    (conv): Conv2d(64, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(80, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
  )
  (Conv2d_4a_3x3): BasicConv2d(
    (conv): Conv2d(80, 192, kernel_size=(3, 3), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
  )
  (maxpool2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  (Mixed_5b): InceptionA(
    (branch1x1): BasicConv2d(
      (conv): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5x5_1): BasicConv2d(
      (conv): Conv2d(192, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5x5_2): BasicConv2d(
      (conv): Conv2d(48, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_1): BasicConv2d(
      (conv): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_2): BasicConv2d(
      (conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_3): BasicConv2d(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): BasicConv2d(
      (conv): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (Mixed_5c): InceptionA(
    (branch1x1): BasicConv2d(
      (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5x5_1): BasicConv2d(
      (conv): Conv2d(256, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5x5_2): BasicConv2d(
      (conv): Conv2d(48, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_1): BasicConv2d(
      (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_2): BasicConv2d(
      (conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_3): BasicConv2d(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): BasicConv2d(
      (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (Mixed_5d): InceptionA(
    (branch1x1): BasicConv2d(
      (conv): Conv2d(288, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5x5_1): BasicConv2d(
      (conv): Conv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5x5_2): BasicConv2d(
      (conv): Conv2d(48, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_1): BasicConv2d(
      (conv): Conv2d(288, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_2): BasicConv2d(
      (conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_3): BasicConv2d(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): BasicConv2d(
      (conv): Conv2d(288, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (Mixed_6a): InceptionB(
    (branch3x3): BasicConv2d(
      (conv): Conv2d(288, 384, kernel_size=(3, 3), stride=(2, 2), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_1): BasicConv2d(
      (conv): Conv2d(288, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_2): BasicConv2d(
      (conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_3): BasicConv2d(
      (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), bias=False)
      (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (Mixed_6b): InceptionC(
    (branch1x1): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_1): BasicConv2d(
      (conv): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_2): BasicConv2d(
      (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_3): BasicConv2d(
      (conv): Conv2d(128, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_1): BasicConv2d(
      (conv): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_2): BasicConv2d(
      (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_3): BasicConv2d(
      (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_4): BasicConv2d(
      (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_5): BasicConv2d(
      (conv): Conv2d(128, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (Mixed_6c): InceptionC(
    (branch1x1): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_1): BasicConv2d(
      (conv): Conv2d(768, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_2): BasicConv2d(
      (conv): Conv2d(160, 160, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_3): BasicConv2d(
      (conv): Conv2d(160, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_1): BasicConv2d(
      (conv): Conv2d(768, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_2): BasicConv2d(
      (conv): Conv2d(160, 160, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_3): BasicConv2d(
      (conv): Conv2d(160, 160, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_4): BasicConv2d(
      (conv): Conv2d(160, 160, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_5): BasicConv2d(
      (conv): Conv2d(160, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (Mixed_6d): InceptionC(
    (branch1x1): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_1): BasicConv2d(
      (conv): Conv2d(768, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_2): BasicConv2d(
      (conv): Conv2d(160, 160, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_3): BasicConv2d(
      (conv): Conv2d(160, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_1): BasicConv2d(
      (conv): Conv2d(768, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_2): BasicConv2d(
      (conv): Conv2d(160, 160, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_3): BasicConv2d(
      (conv): Conv2d(160, 160, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_4): BasicConv2d(
      (conv): Conv2d(160, 160, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_5): BasicConv2d(
      (conv): Conv2d(160, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (Mixed_6e): InceptionC(
    (branch1x1): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_1): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_2): BasicConv2d(
      (conv): Conv2d(192, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7_3): BasicConv2d(
      (conv): Conv2d(192, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_1): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_2): BasicConv2d(
      (conv): Conv2d(192, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_3): BasicConv2d(
      (conv): Conv2d(192, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_4): BasicConv2d(
      (conv): Conv2d(192, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7dbl_5): BasicConv2d(
      (conv): Conv2d(192, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (AuxLogits): InceptionAux(
    (conv0): BasicConv2d(
      (conv): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (conv1): BasicConv2d(
      (conv): Conv2d(128, 768, kernel_size=(5, 5), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(768, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (fc): Linear(in_features=768, out_features=1000, bias=True)
  )
  (Mixed_7a): InceptionD(
    (branch3x3_1): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3_2): BasicConv2d(
      (conv): Conv2d(192, 320, kernel_size=(3, 3), stride=(2, 2), bias=False)
      (bn): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7x3_1): BasicConv2d(
      (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7x3_2): BasicConv2d(
      (conv): Conv2d(192, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7x3_3): BasicConv2d(
      (conv): Conv2d(192, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch7x7x3_4): BasicConv2d(
      (conv): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (Mixed_7b): InceptionE(
    (branch1x1): BasicConv2d(
      (conv): Conv2d(1280, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3_1): BasicConv2d(
      (conv): Conv2d(1280, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3_2a): BasicConv2d(
      (conv): Conv2d(384, 384, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3_2b): BasicConv2d(
      (conv): Conv2d(384, 384, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_1): BasicConv2d(
      (conv): Conv2d(1280, 448, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(448, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_2): BasicConv2d(
      (conv): Conv2d(448, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_3a): BasicConv2d(
      (conv): Conv2d(384, 384, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_3b): BasicConv2d(
      (conv): Conv2d(384, 384, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): BasicConv2d(
      (conv): Conv2d(1280, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (Mixed_7c): InceptionE(
    (branch1x1): BasicConv2d(
      (conv): Conv2d(2048, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3_1): BasicConv2d(
      (conv): Conv2d(2048, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3_2a): BasicConv2d(
      (conv): Conv2d(384, 384, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3_2b): BasicConv2d(
      (conv): Conv2d(384, 384, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_1): BasicConv2d(
      (conv): Conv2d(2048, 448, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(448, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_2): BasicConv2d(
      (conv): Conv2d(448, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_3a): BasicConv2d(
      (conv): Conv2d(384, 384, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3x3dbl_3b): BasicConv2d(
      (conv): Conv2d(384, 384, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): BasicConv2d(
      (conv): Conv2d(2048, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (dropout): Dropout(p=0.5, inplace=False)
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)
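One useful way to read the printed model is to check the channel bookkeeping: an InceptionA block concatenates its four branches along the channel axis, so a block's output channels are the sum of its branches' final output channels, which must match the next block's input channels. A quick check using the branch widths printed above:

```python
# Final out-channels of each InceptionA branch, read off the printed model:
# branch1x1, branch5x5, branch3x3dbl, branch_pool
mixed_5b = [64, 64, 96, 32]
mixed_5c = [64, 64, 96, 64]

# Filter concatenation stacks the branches along the channel axis
out_5b = sum(mixed_5b)
out_5c = sum(mixed_5c)
print(out_5b, out_5c)  # 256 288: the in_channels of Mixed_5c and Mixed_5d above
```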

ResNet

2015, Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Paper. Kaiming He is a name to remember: many influential papers have him as an author (Mask R-CNN, focal loss). Jian Sun, chief scientist at Megvii, needs no introduction. GoogLeNet was already deep, but ResNet can go much deeper: with residual computation it can train networks of over 1000 layers, using what are commonly called skip connections.

The degradation problem

As network depth increases, accuracy on the training set saturates and then degrades. This cannot be explained by overfitting, since overfitting would show better performance on the training set. This is the network degradation problem: it shows that deep networks cannot simply be optimized well.

The residual network's solution

If the later layers of a deep network are identity mappings, the model degrades to a shallow network. The problem then becomes learning the identity mapping. Directly fitting a potential identity mapping H(x) = x with a few layers is hard. But if we design the block as H(x) = F(x) + x, we can instead learn the residual function F(x) = H(x) − x. Whenever F(x) = 0, the block is exactly the identity mapping H(x) = x, and fitting the residual is much easier.

If that is still hard to grasp, here is another way to put it:

Before the activation function, we add the output of an earlier layer (or layers) to the output computed by the current layer, and feed the sum into the activation as this layer's output. The mapping with the residual added is more sensitive to changes in the output: it effectively measures how much the current layer changes relative to the earlier layers, acting like a differential amplifier. The shortcut in a residual block connects the earlier layer's result directly to the current layer, which is the skip connection.
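The arithmetic of the skip connection can be sketched without any framework: the block computes H(x) = F(x) + x, so when the learned residual F is zero the block is exactly the identity. A toy scalar version, where `residual` stands in for the learned layers:

```python
def residual_block(x, residual):
    """H(x) = F(x) + x: add the block's learned computation to its input."""
    return residual(x) + x

# If the block learns F(x) = 0, the mapping is the identity ...
assert residual_block(3.0, lambda x: 0.0) == 3.0

# ... and a small learned residual only nudges the output
y = residual_block(3.0, lambda x: 0.1 * x)
print(y)  # 3.3
```

Because gradients also flow through the `+ x` path unchanged, very deep stacks of such blocks remain trainable.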

Let's look at the network structure of the classic resnet18.

import torchvision
model = torchvision.models.resnet18(pretrained=False)  # do not download pretrained weights
print(model)

Output:

ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=1000, bias=True)
)

How to choose a network

So how should we choose a network? Source

The table above clearly compares accuracy against computational cost. My suggestion: for small image-classification tasks, resnet18 is usually sufficient; only if you truly need higher accuracy should you reach for a stronger architecture.

There is also a common joke: "the poor can only afford AlexNet; only the rich run ResNet."
