
Building a ResNet from Scratch with PyTorch

This is my first blog post, written to record my learning process. The end goal is to implement Faster R-CNN; I am an undergraduate studying object detection with my advisor.

I followed the Bilibili uploader 霹雳吧啦Wz's video on building ResNet with PyTorch; the link is here: 6.2 使用pytorch搭建ResNet并基于迁移学习训练.

To build a ResNet, we first need to refer to its architecture diagram.

First, no matter how deep the ResNet is, its conv1 and the max pool at the start of conv2_x are identical:

import torch.nn as nn

class resnet(nn.Module):
    def __init__(self):
        super(resnet, self).__init__()
        # assume a 600x600x3 input image
        # 600x600x3 --> 300x300x64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        # ceil_mode rounds the output size up
        self.maxpooling = nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn(x)
        x = self.relu(x)
        x = self.maxpooling(x)
        return x

net = resnet()
print(net)

Printing the network:

resnet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpooling): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
)
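As a quick sanity check (a minimal sketch, assuming the 600x600 RGB input from the comments above), we can push a random tensor through the stem: conv1 halves the spatial size to 300x300, and the ceil-mode max pool halves it again to 150x150.

import torch

x = torch.randn(1, 3, 600, 600)  # batch of one 600x600 RGB image
print(net(x).shape)  # torch.Size([1, 64, 150, 150])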

Next we move into the residual structure. Here conv2_x through conv5_x are called layer 1 through layer 4 for short. In ResNet-18/34, the convolutions inside each block of a layer keep the same channel count, but the blocks in ResNet-50/101/152 are different: the third convolution in each block has four times the channels of the first. So we need to define two block types, one for ResNet-18/34 and one for ResNet-50/101/152.

For ResNet-18/34, each block contains two 3x3 convolutions with the same channel count. The first block of each layer halves the width and height of the previous layer's feature map and doubles the channels, except layer 1, which is special because the preceding max pool already did the downsampling. The downsample branch is passed in from outside and is defined later; the code is as follows.

import torch.nn as nn
import torch

# Residual block for ResNet-18/34, i.e. the block repeated inside each layer.
# If the shape changes, the shortcut must go through downsample before the addition.
class bottleneck1(nn.Module):
    # the channel count stays the same inside the block
    expansion = 1

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(bottleneck1, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(in_channels=out_channels, out_channels=out_channels, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample

    def forward(self, x):
        # decide whether the shortcut needs downsample, i.e. whether width/height are halved
        a = x
        if self.downsample is not None:
            a = self.downsample(x)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        # add the (possibly downsampled) shortcut
        x += a
        # apply the activation to the sum
        x = self.relu(x)
        return x
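A quick forward check (a sketch; the downsample branch here is built by hand, the same way makelayer will build it later): a stride-2 bottleneck1 with a matching 1x1-conv shortcut halves the width/height and doubles the channels.

# 1x1 conv shortcut so the residual addition shapes match
ds = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(128),
)
block = bottleneck1(in_channels=64, out_channels=128, stride=2, downsample=ds)
y = block(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 128, 28, 28])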

For ResNet-50/101/152, the third convolution in each block multiplies the channel count by 4; the code is as follows.

# Bottleneck block for ResNet-50/101/152
class bottleneck2(nn.Module):
    expansion = 4

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(bottleneck2, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(in_channels=out_channels, out_channels=out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # the third convolution expands the channels by a factor of 4
        self.conv3 = nn.Conv2d(in_channels=out_channels, out_channels=out_channels*self.expansion, kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels*self.expansion)
        self.downsample = downsample
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        a = x
        if self.downsample is not None:
            a = self.downsample(x)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x = self.relu(x)
        x = self.conv3(x)
        x = self.bn3(x)
        x += a
        x = self.relu(x)
        return x

net = bottleneck2(in_channels=64, out_channels=128, stride=1)
print(net)

Looking at the output, the third convolution of the block expands the channels by a factor of 4:

bottleneck2(
  (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
)
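Note that this instance can be printed but not run forward as-is: with downsample=None, the residual addition would try to add a 64-channel input to a 512-channel output. A minimal forward check (a sketch) with a matching shortcut branch:

ds = nn.Sequential(
    nn.Conv2d(64, 128 * bottleneck2.expansion, kernel_size=1, stride=1, bias=False),
    nn.BatchNorm2d(128 * bottleneck2.expansion),
)
block = bottleneck2(in_channels=64, out_channels=128, stride=1, downsample=ds)
y = block(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 512, 56, 56])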

With both block types defined, we add a method to the resnet class that assembles all the blocks that make up one layer:

# block selects bottleneck1 or bottleneck2
# channel is the channel count of the first convolution in each block of this layer
# block_num is how many blocks the layer contains; for resnet50 this is [3, 4, 6, 3]
def makelayer(self, block, channel, block_num, stride=1):
    downsample = None
    # a downsample branch is needed if the stride is not 1 (the spatial size changes)
    # or if the input channels do not match channel * expansion
    if stride != 1 or self.in_channel != channel*block.expansion:
        downsample = nn.Sequential(nn.Conv2d(in_channels=self.in_channel, out_channels=channel*block.expansion, kernel_size=1, stride=stride, bias=False),
                                   nn.BatchNorm2d(channel*block.expansion))
    # put the first block, which may change the shape, into the list
    layers = []
    layers.append(block(self.in_channel, channel, stride, downsample))
    # the input of the second block is the output of the first
    self.in_channel = channel*block.expansion
    for i in range(1, block_num):
        layers.append(block(self.in_channel, channel))
    return nn.Sequential(*layers)
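One subtle point in this condition: for layer1 of ResNet-50, stride is 1, yet a downsample branch is still created, because the 64 channels coming out of the max pool do not match the 64 * 4 channels coming out of the first bottleneck. A tiny check of the condition (a sketch; the values are taken from the ResNet-50 layer1 case):

in_channel, channel, expansion, stride = 64, 64, 4, 1
print(stride != 1 or in_channel != channel * expansion)  # True: layer1 still needs a 1x1-conv shortcut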

With makelayer defined, we can assemble the network and write the forward pass:

class resnet(nn.Module):
    in_channel = 64

    def __init__(self, block, block_num, num_classes=1000):
        super(resnet, self).__init__()
        # assume a 600x600x3 input image
        # 600x600x3 --> 300x300x64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        # ceil_mode rounds the output size up
        self.maxpooling = nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)
        # the first layer keeps stride 1 (the max pool already downsampled)
        self.layer1 = self.makelayer(block=block, channel=64, block_num=block_num[0])
        # from the second layer on, every layer downsamples with stride 2
        self.layer2 = self.makelayer(block=block, channel=128, block_num=block_num[1], stride=2)
        self.layer3 = self.makelayer(block=block, channel=256, block_num=block_num[2], stride=2)
        self.layer4 = self.makelayer(block=block, channel=512, block_num=block_num[3], stride=2)
        # adaptive average pooling: whatever the input height/width, the output is 1x1
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512*block.expansion, num_classes)
        # initialize the convolution layers
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity='relu')

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn(x)
        x = self.relu(x)
        x = self.maxpooling(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, start_dim=1)
        x = self.fc(x)
        return x
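To see what each stage does to the feature map, we can trace a forward pass with hooks (a sketch, assuming the 600x600 input from the comments, and that makelayer from the previous snippet has been pasted into the class):

net = resnet(block=bottleneck2, block_num=[3, 4, 6, 3])
x = torch.randn(1, 3, 600, 600)
for name in ["maxpooling", "layer1", "layer2", "layer3", "layer4"]:
    # print each stage's output shape as the tensor flows through
    getattr(net, name).register_forward_hook(
        lambda m, i, o, name=name: print(name, tuple(o.shape)))
net(x)
# maxpooling (1, 64, 150, 150)
# layer1 (1, 256, 150, 150)
# layer2 (1, 512, 75, 75)
# layer3 (1, 1024, 38, 38)
# layer4 (1, 2048, 19, 19)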

The network ends with average pooling and a fully connected layer, which completes the ResNet definition. The full code is below.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2022/5/12 16:37
# @Author : 半岛铁盒
# @File : resnet 50.py
# @Software: win10 python3.6
import torch.nn as nn
import torch

# Residual block for ResNet-18/34, i.e. the block repeated inside each layer.
# If the shape changes, the shortcut must go through downsample before the addition.
class bottleneck1(nn.Module):
    # the channel count stays the same inside the block
    expansion = 1

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(bottleneck1, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(in_channels=out_channels, out_channels=out_channels, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample

    def forward(self, x):
        # decide whether the shortcut needs downsample
        a = x
        if self.downsample is not None:
            a = self.downsample(x)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        # add the (possibly downsampled) shortcut
        x += a
        # apply the activation to the sum
        x = self.relu(x)
        return x

# Bottleneck block for ResNet-50/101/152
class bottleneck2(nn.Module):
    expansion = 4

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(bottleneck2, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(in_channels=out_channels, out_channels=out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # the third convolution expands the channels by a factor of 4
        self.conv3 = nn.Conv2d(in_channels=out_channels, out_channels=out_channels*self.expansion, kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels*self.expansion)
        self.downsample = downsample
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        a = x
        if self.downsample is not None:
            a = self.downsample(x)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x = self.relu(x)
        x = self.conv3(x)
        x = self.bn3(x)
        x += a
        x = self.relu(x)
        return x

class resnet(nn.Module):
    in_channel = 64

    def __init__(self, block, block_num, num_classes=1000):
        super(resnet, self).__init__()
        # assume a 600x600x3 input image
        # 600x600x3 --> 300x300x64
        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        # ceil_mode rounds the output size up
        self.maxpooling = nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)
        # the first layer keeps stride 1 (the max pool already downsampled)
        self.layer1 = self.makelayer(block=block, channel=64, block_num=block_num[0])
        # from the second layer on, every layer downsamples with stride 2
        self.layer2 = self.makelayer(block=block, channel=128, block_num=block_num[1], stride=2)
        self.layer3 = self.makelayer(block=block, channel=256, block_num=block_num[2], stride=2)
        self.layer4 = self.makelayer(block=block, channel=512, block_num=block_num[3], stride=2)
        # adaptive average pooling: whatever the input height/width, the output is 1x1
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512*block.expansion, num_classes)
        # initialize the convolution layers
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    # block selects bottleneck1 or bottleneck2
    # channel is the channel count of the first convolution in each block of this layer
    # block_num is how many blocks the layer contains; for resnet50 this is [3, 4, 6, 3]
    def makelayer(self, block, channel, block_num, stride=1):
        downsample = None
        # a downsample branch is needed if the stride is not 1 (the spatial size changes)
        # or if the input channels do not match channel * expansion
        if stride != 1 or self.in_channel != channel*block.expansion:
            downsample = nn.Sequential(nn.Conv2d(in_channels=self.in_channel, out_channels=channel*block.expansion, kernel_size=1, stride=stride, bias=False),
                                       nn.BatchNorm2d(channel*block.expansion))
        # put the first block, which may change the shape, into the list
        layers = []
        layers.append(block(self.in_channel, channel, stride, downsample))
        # the input of the second block is the output of the first
        self.in_channel = channel*block.expansion
        for i in range(1, block_num):
            layers.append(block(self.in_channel, channel))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn(x)
        x = self.relu(x)
        x = self.maxpooling(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, start_dim=1)
        x = self.fc(x)
        return x

net = resnet(block=bottleneck2, block_num=[3, 4, 6, 3])
print(net)
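As an optional cross-check (a sketch, assuming torchvision is installed), the hand-built ResNet-50 should match torchvision's reference implementation in parameter count, since the layer shapes are the same:

import torchvision

ours = sum(p.numel() for p in net.parameters())
ref = sum(p.numel() for p in torchvision.models.resnet50().parameters())
print(ours == ref, ours)  # expected: True 25557032 (~25.6M parameters)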

Print the final model to check the ResNet-50 structure:

resnet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpooling): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
  (layer1): Sequential(
    (0): bottleneck2(
      (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (relu): ReLU(inplace=True)
    )
    (1): bottleneck2(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): bottleneck2(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer2): Sequential(
    (0): bottleneck2(
      (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (relu): ReLU(inplace=True)
    )
    (1): bottleneck2(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): bottleneck2(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): bottleneck2(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer3): Sequential(
    (0): bottleneck2(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (relu): ReLU(inplace=True)
    )
    (1): bottleneck2(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): bottleneck2(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): bottleneck2(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (4): bottleneck2(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (5): bottleneck2(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer4): Sequential(
    (0): bottleneck2(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (relu): ReLU(inplace=True)
    )
    (1): bottleneck2(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): bottleneck2(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)
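Since the original title mentions ResNet-200: with both block types and makelayer in place, the deeper variants differ only in block_num. A sketch of the variants (the 34/101/152 block counts are from the original ResNet paper; [3, 24, 36, 3] for ResNet-200 comes from the follow-up "Identity Mappings in Deep Residual Networks" paper, which also reorders BN/ReLU in a way this plain implementation does not reproduce):

net34 = resnet(block=bottleneck1, block_num=[3, 4, 6, 3])    # ResNet-34
net101 = resnet(block=bottleneck2, block_num=[3, 4, 23, 3])  # ResNet-101
net152 = resnet(block=bottleneck2, block_num=[3, 8, 36, 3])  # ResNet-152
net200 = resnet(block=bottleneck2, block_num=[3, 24, 36, 3]) # ResNet-200 (block counts only)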
