
Object Detection: Improving YOLOv5/YOLOv7 with the Lightweight Network ShuffleNetV2

Paper: "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design"
Paper link: https://arxiv.org/abs/1807.11164

FLOPS: floating-point operations per second; a measure of hardware throughput, determined by the hardware.
GFLOPS: one billion floating-point operations per second; 1 GFLOPS = 1,000 MFLOPS.
FLOPs (lowercase "s"): the total number of floating-point operations, i.e. the amount of computation; it can be used to measure the complexity of an algorithm or model. Concretely it counts the multiply-add operations in the model. For a standard convolutional layer, FLOPs = 2 * H * W * (Cin * K^2 + 1) * Cout, where H x W is the output feature-map size, K the kernel size, and the +1 accounts for the bias.
MACCs: multiply-accumulate operations, i.e. dot-product steps; one MACC = 2 FLOPs.
MAC: memory access cost. As an example, for a 1x1 convolution: MAC = H * W * Cin + H * W * Cout + 1 * 1 * Cin * Cout. The first term is the memory traffic of the input feature map, the second that of the output feature map, and the third that of the convolution weights.
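The two formulas above can be checked with a few lines of plain Python. The helper names (`conv_flops`, `conv_mac`) and the layer sizes are chosen here only for illustration:

```python
def conv_flops(h, w, c_in, c_out, k):
    """FLOPs of a standard conv layer: 2 * H * W * (Cin * K^2 + 1) * Cout."""
    return 2 * h * w * (c_in * k * k + 1) * c_out

def conv_mac(h, w, c_in, c_out, k):
    """Memory access cost of a KxK conv: input map + output map + weights."""
    return h * w * c_in + h * w * c_out + k * k * c_in * c_out

# Example: a 1x1 conv on a 56x56 feature map, 64 -> 128 channels
print(conv_flops(56, 56, 64, 128, 1))  # 52183040 FLOPs
print(conv_mac(56, 56, 64, 128, 1))    # 610304 memory accesses
```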

ShuffleNetV2 makes a key observation: previous lightweight networks were guided by FLOPs, an indirect measure of network complexity, and used the floating-point operation count to characterize how fast a network is, without ever considering actual runtime directly. On a mobile device, running speed depends not only on FLOPs but also on other factors, such as memory access cost (MAC) and platform characteristics. FLOPs is only an approximation of the direct metrics, speed and latency, and is not equivalent to them; the direct metrics are what we actually need to care about.
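One practical guideline in the paper follows directly from the MAC formula above: for a 1x1 convolution with a fixed FLOPs budget (i.e. fixed Cin * Cout), memory access cost is minimized when the input and output channel counts are equal. A minimal sketch, with channel splits chosen only for illustration:

```python
def mac_1x1(h, w, c_in, c_out):
    """MAC of a 1x1 conv: input map + output map + weights."""
    return h * w * (c_in + c_out) + c_in * c_out

h, w = 56, 56
# Three channel configurations with identical FLOPs (c_in * c_out = 16384):
for c_in, c_out in [(32, 512), (64, 256), (128, 128)]:
    print(c_in, c_out, mac_1x1(h, w, c_in, c_out))
# The balanced split (128, 128) yields the smallest MAC.
```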

The author (小海带) combines the YOLOv5 algorithm with the lightweight network ShuffleNetV2: with the model's detection accuracy essentially unchanged, the FLOPs drop substantially and inference speed improves noticeably.

ShuffleNetV2 code:

import torch
import torch.nn as nn


def split(x, groups):
    # Split the channel dimension into `groups` equal chunks
    out = x.chunk(groups, dim=1)
    return out


def shuffle(x, groups):
    # Channel shuffle: interleave channels across the groups
    N, C, H, W = x.size()
    out = x.view(N, groups, C // groups, H, W).permute(0, 2, 1, 3, 4).contiguous().view(N, C, H, W)
    return out


class ShuffleUnit(nn.Module):
    def __init__(self, in_channels, out_channels, stride):
        super().__init__()
        mid_channels = out_channels // 2
        if stride > 1:
            # Downsampling unit: both branches process the full input
            self.branch1 = nn.Sequential(
                nn.Conv2d(in_channels, in_channels, 3, stride=stride, padding=1,
                          groups=in_channels, bias=False),
                nn.BatchNorm2d(in_channels),
                nn.Conv2d(in_channels, mid_channels, 1, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True)
            )
            self.branch2 = nn.Sequential(
                nn.Conv2d(in_channels, mid_channels, 1, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(mid_channels, mid_channels, 3, stride=stride, padding=1,
                          groups=mid_channels, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.Conv2d(mid_channels, mid_channels, 1, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True)
            )
        else:
            # Basic unit: channel split, identity on one branch
            self.branch1 = nn.Sequential()
            self.branch2 = nn.Sequential(
                nn.Conv2d(mid_channels, mid_channels, 1, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(mid_channels, mid_channels, 3, stride=stride, padding=1,
                          groups=mid_channels, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.Conv2d(mid_channels, mid_channels, 1, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True)
            )
        self.stride = stride

    def forward(self, x):
        if self.stride == 1:
            x1, x2 = split(x, 2)
            out = torch.cat((self.branch1(x1), self.branch2(x2)), dim=1)
        else:
            out = torch.cat((self.branch1(x), self.branch2(x)), dim=1)
        out = shuffle(out, 2)
        return out


class ShuffleNetV2(nn.Module):
    def __init__(self, channel_num, class_num=1000):  # original default: settings.CLASSES_NUM
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 24, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(24),
            nn.ReLU(inplace=True)
        )
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.stage2 = self.make_layers(24, channel_num[0], 4, 2)
        self.stage3 = self.make_layers(channel_num[0], channel_num[1], 8, 2)
        self.stage4 = self.make_layers(channel_num[1], channel_num[2], 4, 2)
        self.conv5 = nn.Sequential(
            nn.Conv2d(channel_num[2], 1024, 1, bias=False),
            nn.BatchNorm2d(1024),
            nn.ReLU(inplace=True)
        )
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(1024, class_num)

    def make_layers(self, in_channels, out_channels, layers_num, stride):
        layers = []
        layers.append(ShuffleUnit(in_channels, out_channels, stride))
        in_channels = out_channels
        for i in range(layers_num - 1):
            layers.append(ShuffleUnit(in_channels, out_channels, 1))
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv1(x)
        out = self.maxpool(out)
        out = self.stage2(out)
        out = self.stage3(out)
        out = self.stage4(out)
        out = self.conv5(out)
        out = self.avgpool(out)
        out = out.flatten(1)
        out = self.fc(out)
        return out
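The backbone above downsamples five times by a factor of 2 (conv1, maxpool, and the stride-2 unit at the start of stages 2-4), for an overall stride of 32. The spatial sizes can be traced in plain Python; the 224x224 input resolution is an assumption for illustration, not something fixed by the article:

```python
def out_size(size, kernel, stride, padding):
    """Spatial output size of a conv/pool layer: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

size = 224  # assumed input resolution
# conv1, maxpool, and stages 2-4 all downsample with k=3, s=2, p=1
for name in ["conv1", "maxpool", "stage2", "stage3", "stage4"]:
    size = out_size(size, 3, 2, 1)
    print(name, size)
# 224 -> 112 -> 56 -> 28 -> 14 -> 7, i.e. overall stride 32
```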

In my own tests, YOLOv5-ShuffleNetV2 has dramatically fewer parameters and far lower compute, and its detection speed is remarkably fast.

