
YOLOv5 Model Study Notes: A Detailed Look at the Model Structure

1 Introduction

YOLOv5 comes in five variants: yolov5n, yolov5s, yolov5m, yolov5l, and yolov5x. This article focuses on the network structure of the YOLOv5s model.

 

2 Source Files

Essentially all of the modules used by YOLOv5 live in common.py; yolo.py then reads the yaml configuration file and calls the required modules to assemble the network.

3 YOLOv5s Network Structure

3.1 Reading the yaml File

Take the first convolution entry [-1, 1, Conv, [64, 6, 2, 2]] as an example: -1 means the module's input comes from the output of the previous layer, 1 means the entry contains a single Conv module, Conv is the module name, and [64, 6, 2, 2] are the arguments passed to the module, i.e. 64 output channels, kernel size 6, stride 2, and padding 2. These parameters are shown in more detail in the structure diagrams later.

anchors: holds the anchor information. YOLOv5 makes its final predictions from three feature levels; each row corresponds to one feature level, and the elements in a row describe three anchor boxes of different sizes (width, height pairs).

Pay particular attention to depth_multiple and width_multiple. When building the model, YOLOv5 does not use the yaml parameters directly: each module's depth_number is multiplied by depth_multiple and its output channel count is multiplied by width_multiple, and only the scaled values are used (a short sketch of this scaling follows the yaml listing below).

    # Parameters
    nc: 80  # number of classes
    depth_multiple: 0.33  # model depth multiple
    width_multiple: 0.50  # layer channel multiple
    anchors:  # anchor information
      - [10,13, 16,30, 33,23]  # P3/8
      - [30,61, 62,45, 59,119]  # P4/16
      - [116,90, 156,198, 373,326]  # P5/32

    # YOLOv5 v6.0 backbone
    backbone:
      # [from, depth_number, module, args]
      # from: where the module's input comes from
      # depth_number: how many layers the module contains (mainly the number of Bottlenecks inside a C3 module)
      # module: name of the module
      # args: arguments passed to the module
      [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
       [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
       [-1, 3, C3, [128]],
       [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
       [-1, 6, C3, [256]],
       [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
       [-1, 9, C3, [512]],
       [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
       [-1, 3, C3, [1024]],
       [-1, 1, SPPF, [1024, 5]],  # 9
      ]

    # YOLOv5 v6.0 head
    head:
      [[-1, 1, Conv, [512, 1, 1]],
       [-1, 1, nn.Upsample, [None, 2, 'nearest']],
       [[-1, 6], 1, Concat, [1]],  # cat backbone P4
       [-1, 3, C3, [512, False]],  # 13
       [-1, 1, Conv, [256, 1, 1]],
       [-1, 1, nn.Upsample, [None, 2, 'nearest']],
       [[-1, 4], 1, Concat, [1]],  # cat backbone P3
       [-1, 3, C3, [256, False]],  # 17 (P3/8-small)
       [-1, 1, Conv, [256, 3, 2]],
       [[-1, 14], 1, Concat, [1]],  # cat head P4
       [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)
       [-1, 1, Conv, [512, 3, 2]],
       [[-1, 10], 1, Concat, [1]],  # cat head P5
       [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)
       [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
      ]
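
To make the two multipliers concrete, here is a minimal sketch of how the builder in yolo.py effectively scales a yaml entry for yolov5s (simplified for illustration; make_divisible mirrors the Ultralytics helper of the same name, and the rounding rules follow parse_model):

    import math

    depth_multiple, width_multiple = 0.33, 0.50  # yolov5s values from the yaml above

    def make_divisible(x, divisor=8):
        # round a channel count up to the nearest multiple of `divisor`
        return math.ceil(x / divisor) * divisor

    def scale_depth(n):
        # number of repeats actually built for a module (never less than 1)
        return max(round(n * depth_multiple), 1) if n > 1 else n

    def scale_width(c):
        # number of output channels actually built for a module
        return make_divisible(c * width_multiple, 8)

    # backbone entry [-1, 9, C3, [512]]:
    print(scale_depth(9), scale_width(512))  # -> 3 Bottleneck repeats, 256 output channels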

3.2 Module-by-Module Walkthrough (Code + Structure Diagrams)

 3.2.1 Conv

A simple convolution module consisting of a convolution layer, a BatchNorm layer, and an activation function.

 

    # imports used by the modules from common.py shown in this section
    import warnings

    import torch
    import torch.nn as nn


    def autopad(k, p=None):  # kernel, padding
        # Pad to 'same'
        if p is None:
            p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
        return p


    class Conv(nn.Module):
        # Standard convolution
        def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
            super().__init__()
            self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
            self.bn = nn.BatchNorm2d(c2)
            self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

        def forward(self, x):
            return self.act(self.bn(self.conv(x)))

        def forward_fuse(self, x):
            return self.act(self.conv(x))
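
A quick shape check (a minimal standalone sketch; it assumes the Conv class above is in scope and uses the yolov5s-scaled channel count 64 * 0.5 = 32): the first backbone Conv with k=6, s=2, p=2 halves the spatial resolution, matching the "0-P1/2" stage in the yaml.

    import torch

    x = torch.randn(1, 3, 640, 640)      # dummy 640x640 RGB image
    conv0 = Conv(3, 32, k=6, s=2, p=2)   # yaml args [64, 6, 2, 2] after width_multiple = 0.5
    print(conv0(x).shape)                # torch.Size([1, 32, 320, 320])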

3.2.2 C3 Module

3.2.2.1 Bottleneck

 

    class Bottleneck(nn.Module):
        # Standard bottleneck
        def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, shortcut, groups, expansion
            super().__init__()
            c_ = int(c2 * e)  # hidden channels
            self.cv1 = Conv(c1, c_, 1, 1)
            self.cv2 = Conv(c_, c2, 3, 1, g=g)
            self.add = shortcut and c1 == c2

        def forward(self, x):
            return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
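
The residual add only happens when shortcut is True and the input and output channel counts match; a brief sketch of both cases (assuming the Bottleneck and Conv classes above are in scope):

    import torch

    x = torch.randn(1, 64, 80, 80)
    print(Bottleneck(64, 64)(x).shape)    # (1, 64, 80, 80): c1 == c2, so x + cv2(cv1(x)) is used
    print(Bottleneck(64, 128)(x).shape)   # (1, 128, 80, 80): c1 != c2, so only cv2(cv1(x)) is used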

 3.2.2.2 C3

The depth_number introduced earlier takes effect here: n * depth_multiple (rounded, and at least 1) is the number of Bottleneck blocks stacked in the C3 module.

 

    class C3(nn.Module):
        # CSP Bottleneck with 3 convolutions
        def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
            super().__init__()
            c_ = int(c2 * e)  # hidden channels
            self.cv1 = Conv(c1, c_, 1, 1)
            self.cv2 = Conv(c1, c_, 1, 1)
            self.cv3 = Conv(2 * c_, c2, 1)  # optional act=FReLU(c2)
            self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
            # self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)))

        def forward(self, x):
            return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))
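
For example, the backbone entry [-1, 9, C3, [512]] becomes, in yolov5s, a C3 with round(9 * 0.33) = 3 Bottlenecks and 512 * 0.5 = 256 output channels. A minimal sketch of what the builder effectively instantiates for that layer (assuming the classes above are in scope; the input channel count 256 comes from the preceding Conv):

    import torch

    c3 = C3(c1=256, c2=256, n=3)     # scaled yolov5s version of [-1, 9, C3, [512]]
    x = torch.randn(1, 256, 40, 40)  # P4/16 feature map for a 640x640 input
    print(c3(x).shape)               # torch.Size([1, 256, 40, 40]) -- spatial size unchanged
    print(len(c3.m))                 # 3 Bottleneck blocks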

3.2.3 SPPF 

 

    class SPPF(nn.Module):
        # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
        def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
            super().__init__()
            c_ = c1 // 2  # hidden channels
            self.cv1 = Conv(c1, c_, 1, 1)
            self.cv2 = Conv(c_ * 4, c2, 1, 1)
            self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

        def forward(self, x):
            x = self.cv1(x)
            with warnings.catch_warnings():
                warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
                y1 = self.m(x)
                y2 = self.m(y1)
                y3 = self.m(y2)
                return self.cv2(torch.cat((x, y1, y2, y3), 1))
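
The comment "equivalent to SPP(k=(5, 9, 13))" holds because chaining a stride-1 5x5 max pool two or three times covers the same windows as a single 9x9 or 13x13 pool, while re-using the intermediate results. A standalone sketch verifying the equivalence (not part of common.py):

    import torch
    import torch.nn as nn

    x = torch.randn(1, 8, 20, 20)
    pool5 = nn.MaxPool2d(5, stride=1, padding=2)
    pool9 = nn.MaxPool2d(9, stride=1, padding=4)
    pool13 = nn.MaxPool2d(13, stride=1, padding=6)

    print(torch.equal(pool5(pool5(x)), pool9(x)))           # True: two 5x5 pools == one 9x9 pool
    print(torch.equal(pool5(pool5(pool5(x))), pool13(x)))   # True: three 5x5 pools == one 13x13 pool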

3.2.4 Detect

The Detect layer takes feature maps from three levels and produces the final predictions. For each anchor, a prediction contains 80 class probabilities, 4 bounding-box values, and 1 objectness confidence score.

 

 

    class Detect(nn.Module):
        stride = None  # strides computed during build
        onnx_dynamic = False  # ONNX export parameter
        export = False  # export mode

        def __init__(self, nc=80, anchors=(), ch=(), inplace=True):  # detection layer
            super().__init__()
            self.nc = nc  # number of classes
            self.no = nc + 5  # number of outputs per anchor
            self.nl = len(anchors)  # number of detection layers
            self.na = len(anchors[0]) // 2  # number of anchors
            self.grid = [torch.zeros(1)] * self.nl  # init grid
            self.anchor_grid = [torch.zeros(1)] * self.nl  # init anchor grid
            self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2))  # shape(nl,na,2)
            self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv
            self.inplace = inplace  # use in-place ops (e.g. slice assignment)

        def forward(self, x):
            z = []  # inference output
            for i in range(self.nl):
                x[i] = self.m[i](x[i])  # conv
                bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
                x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

                if not self.training:  # inference
                    if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
                        self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)

                    y = x[i].sigmoid()
                    if self.inplace:
                        y[..., 0:2] = (y[..., 0:2] * 2 + self.grid[i]) * self.stride[i]  # xy: box center decoded by adding the grid offset and scaling by the stride
                        y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh: box size decoded by scaling the anchor dimensions
                    else:  # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
                        xy, wh, conf = y.split((2, 2, self.nc + 1), 4)  # y.tensor_split((2, 4, 5), 4)  # torch 1.8.0
                        xy = (xy * 2 + self.grid[i]) * self.stride[i]  # xy
                        wh = (wh * 2) ** 2 * self.anchor_grid[i]  # wh
                        y = torch.cat((xy, wh, conf), 4)
                    z.append(y.view(bs, -1, self.no))

            return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x)
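
To make the shapes concrete: for a 640x640 input, yolov5s feeds Detect three feature maps of 80x80, 40x40 and 20x20 with 128, 256 and 512 channels, each 1x1 output conv produces na * no = 3 * 85 = 255 channels, and during inference the per-level predictions are flattened and concatenated into 3*(80*80 + 40*40 + 20*20) = 25200 candidate boxes. Below is a minimal sketch of the training-mode shapes (assuming the Detect class above; anchors copied from the yaml). Inference mode additionally needs self.stride and the _make_grid helper, which are set up when the full model is built in yolo.py and are not shown in the listing above.

    import torch

    anchors = ([10, 13, 16, 30, 33, 23],       # P3/8
               [30, 61, 62, 45, 59, 119],      # P4/16
               [116, 90, 156, 198, 373, 326])  # P5/32
    det = Detect(nc=80, anchors=anchors, ch=(128, 256, 512))  # yolov5s channels after width scaling
    det.train()  # training mode: forward only applies the 1x1 convs and reshapes

    feats = [torch.randn(1, 128, 80, 80),   # P3/8
             torch.randn(1, 256, 40, 40),   # P4/16
             torch.randn(1, 512, 20, 20)]   # P5/32
    for o in det(feats):
        print(o.shape)  # (1, 3, 80, 80, 85), (1, 3, 40, 40, 85), (1, 3, 20, 20, 85)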

3.3 Overall Model Architecture

The overall architecture of the YOLOv5s model is shown in the figure below; the text is small, so open the image to view it at full size.

 

 

 
