CBAM (Convolutional Block Attention Module) is a simple yet effective attention module designed to increase the representational power of convolutional neural networks (CNNs). It adaptively refines feature maps through two complementary attention mechanisms, channel attention and spatial attention, improving overall model performance.

Channel attention addresses "what" should be emphasized or suppressed. Working from global information, it extracts two global descriptors from the feature map via average pooling and max pooling; together these capture the importance and distribution of the individual channels. Both descriptors are then passed through a shared two-layer bottleneck (1×1 convolutions in the implementation below), producing an attention map with as many entries as the input has channels. Multiplying this map with the original feature map selectively amplifies or attenuates each channel.

Spatial attention addresses "where", i.e. which positions in the feature map should be emphasized or suppressed. It first computes the channel-wise mean and maximum at every spatial position, concatenates the two resulting maps, and passes them through a single convolution layer to produce a 2D attention map. Multiplying this map with the feature map lets the model focus on the key regions.

CBAM applies the two mechanisms in sequence: the channel attention map first reweights the input feature map, and the spatial attention map then reweights that intermediate result, yielding the refined output. The module is lightweight and generic, can be dropped into any CNN architecture, and trains end-to-end together with the base network.

Extensive experiments on ImageNet-1K classification and on MS COCO and VOC 2007 detection confirm CBAM's effectiveness: it consistently improves the classification and detection performance of a range of baseline models, and the gains come from using the existing features more effectively rather than from a large number of extra parameters. CBAM also works well with lightweight backbones, which makes it attractive for resource-constrained devices.

In short, CBAM is a powerful attention mechanism that helps a CNN focus on the target objects, improving both recognition and localization.
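The two gates described above can be sketched in a few lines. The following NumPy toy is an illustration only: the learned shared bottleneck and the 7×7 convolution are replaced by identity/summation, so only the pooling-and-gating structure matches real CBAM.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_gate(x):
    # x: (C, H, W) -> one weight in (0, 1) per channel ("what")
    avg_desc = x.mean(axis=(1, 2))        # global average pooling descriptor
    max_desc = x.max(axis=(1, 2))         # global max pooling descriptor
    # the shared learned bottleneck is omitted (identity) in this sketch
    return sigmoid(avg_desc + max_desc)   # (C,)

def spatial_gate(x):
    # x: (C, H, W) -> one weight in (0, 1) per position ("where")
    avg_map = x.mean(axis=0, keepdims=True)  # (1, H, W)
    max_map = x.max(axis=0, keepdims=True)   # (1, H, W)
    # the 7x7 conv over the 2-channel stack is omitted; a sum stands in
    return sigmoid(avg_map + max_map)        # (1, H, W)

x = np.random.randn(8, 4, 4)
ca = channel_gate(x)
y = x * ca[:, None, None]   # refine channels first
sa = spatial_gate(y)
y = y * sa                  # then refine positions
print(y.shape)  # (8, 4, 4)
```

In the real module both gates contain trainable weights; the full PyTorch implementation follows below.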
First, create a new file named moreattention.py in the models folder of your YOLOv5/v7 project and add the following code.
from models.common import *


# CBAM attention module: https://arxiv.org/pdf/1807.06521
class ChannelAttention(nn.Module):
    def __init__(self, in_planes, ratio=16):
        super(ChannelAttention, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        # shared two-layer bottleneck, implemented as 1x1 convolutions
        self.f1 = nn.Conv2d(in_planes, in_planes // ratio, 1, bias=False)
        self.relu = nn.ReLU()
        self.f2 = nn.Conv2d(in_planes // ratio, in_planes, 1, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = self.f2(self.relu(self.f1(self.avg_pool(x))))
        max_out = self.f2(self.relu(self.f1(self.max_pool(x))))
        out = self.sigmoid(avg_out + max_out)
        return out


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super(SpatialAttention, self).__init__()
        assert kernel_size in (3, 7), 'kernel size must be 3 or 7'
        padding = 3 if kernel_size == 7 else 1
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # channel-wise mean and max, each of shape (N, 1, H, W)
        avg_out = torch.mean(x, dim=1, keepdim=True)
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        x = torch.cat([avg_out, max_out], dim=1)
        x = self.conv(x)
        return self.sigmoid(x)


class CBAM(nn.Module):
    # Standard convolution followed by channel and spatial attention
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(CBAM, self).__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.LeakyReLU(0.1) if act else nn.Identity()
        self.ca = ChannelAttention(c2)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = self.act(self.bn(self.conv(x)))
        x = self.ca(x) * x  # refine "what"
        x = self.sa(x) * x  # refine "where"
        return x

    def fuseforward(self, x):
        return self.act(self.conv(x))
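The CBAM wrapper above calls autopad from models/common.py. For reference, a minimal equivalent of that helper (matching the YOLOv5/v7 version without the later dilation argument) defaults to "same"-style padding for odd kernel sizes:

```python
def autopad(k, p=None):  # kernel, padding
    # when no padding is given, pad by k // 2 so output size matches input
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]
    return p

print(autopad(3))     # 1
print(autopad(1))     # 0
print(autopad(7))     # 3
print(autopad(3, 0))  # 0 (explicit padding wins)
```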
Next, in models/yolo.py of the YOLOv5/v7 project, add the following import at the top of the file:
from models.moreattention import CBAM
Then search for def parse_model(d, ch) and, inside that function, add the following entry to the list of modules whose output channels are read from the YAML args (the branch that begins with if m in [Conv, ...]):
CBAM,
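Registering CBAM in that list matters because parse_model treats listed modules like Conv: it reads the first YAML argument as the raw output-channel count, scales it by width_multiple, and prepends the inferred input channels. A toy sketch of that bookkeeping (function names here are illustrative, not the actual yolo.py code; make_divisible mirrors the YOLOv5 helper):

```python
import math

def make_divisible(x, divisor=8):
    # round a channel count up to a multiple of `divisor` (mirrors YOLOv5's helper)
    return math.ceil(x / divisor) * divisor

def resolve_args(module_name, yaml_args, c1, width_multiple, registered):
    # for registered modules, yaml_args[0] is the raw output-channel count c2;
    # parse_model scales it by width_multiple and prepends the input channels c1
    if module_name in registered:
        c2 = make_divisible(yaml_args[0] * width_multiple, 8)
        return [c1, c2, *yaml_args[1:]]
    return yaml_args

registered = {"Conv", "CBAM"}  # CBAM now parsed like Conv
print(resolve_args("CBAM", [1024], c1=512, width_multiple=0.50, registered=registered))  # [512, 512]
```

With width_multiple: 0.50, a YAML entry [-1, 1, CBAM, [1024]] is therefore built as CBAM(512, 512).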
After completing step two, create a new file named yolov7-tiny-cbam.yaml in the models folder of the YOLOv7 project and add the following configuration.
# parameters
nc: 80  # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# yolov7-tiny backbone
backbone:
  # [from, number, module, args] c2, k=1, s=1, p=None, g=1, act=True
  [[-1, 1, Conv, [32, 3, 2, None, 1, nn.LeakyReLU(0.1)]],  # 0-P1/2

   [-1, 1, Conv, [64, 3, 2, None, 1, nn.LeakyReLU(0.1)]],  # 1-P2/4

   [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 7

   [-1, 1, MP, []],  # 8-P3/8
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 14

   [-1, 1, MP, []],  # 15-P4/16
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 21

   [-1, 1, MP, []],  # 22-P5/32
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [512, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 28
  ]

# yolov7-tiny head
head:
  [[-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, SP, [5]],
   [-2, 1, SP, [9]],
   [-3, 1, SP, [13]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -7], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 37

   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [21, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # route backbone P4
   [[-1, -2], 1, Concat, [1]],

   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 47

   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [14, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # route backbone P3
   [[-1, -2], 1, Concat, [1]],

   [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 57

   [-1, 1, Conv, [128, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, 47], 1, Concat, [1]],

   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 65

   [-1, 1, Conv, [256, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, 37], 1, Concat, [1]],

   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 73

   [-1, 1, CBAM, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 74

   [57, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [65, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [74, 1, Conv, [512, 3, 1, None, 1, nn.LeakyReLU(0.1)]],

   [[75,76,77], 1, IDetect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
from n params module arguments
0 -1 1 928 models.common.Conv [3, 32, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
2 -1 1 2112 models.common.Conv [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
3 -2 1 2112 models.common.Conv [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
4 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
5 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
6 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
7 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
8 -1 1 0 models.common.MP []
9 -1 1 4224 models.common.Conv [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
10 -2 1 4224 models.common.Conv [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
11 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
12 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
13 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
15 -1 1 0 models.common.MP []
16 -1 1 16640 models.common.Conv [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
17 -2 1 16640 models.common.Conv [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
18 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
19 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
20 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
21 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
22 -1 1 0 models.common.MP []
23 -1 1 66048 models.common.Conv [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
24 -2 1 66048 models.common.Conv [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
25 -1 1 590336 models.common.Conv [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
26 -1 1 590336 models.common.Conv [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
27 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
28 -1 1 525312 models.common.Conv [1024, 512, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
29 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
30 -2 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
31 -1 1 0 models.common.SP [5]
32 -2 1 0 models.common.SP [9]
33 -3 1 0 models.common.SP [13]
34 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
35 -1 1 262656 models.common.Conv [1024, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
36 [-1, -7] 1 0 models.common.Concat [1]
37 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
38 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
39 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
40 21 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
41 [-1, -2] 1 0 models.common.Concat [1]
42 -1 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
43 -2 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
44 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
45 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
46 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
47 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
48 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
49 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
50 14 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
51 [-1, -2] 1 0 models.common.Concat [1]
52 -1 1 4160 models.common.Conv [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
53 -2 1 4160 models.common.Conv [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
54 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
55 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
56 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
57 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
58 -1 1 73984 models.common.Conv [64, 128, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
59 [-1, 47] 1 0 models.common.Concat [1]
60 -1 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
61 -2 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
62 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
63 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
64 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
65 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
66 -1 1 295424 models.common.Conv [128, 256, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
67 [-1, 37] 1 0 models.common.Concat [1]
68 -1 1 65792 models.common.Conv [512, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
69 -2 1 65792 models.common.Conv [512, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
70 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
71 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
72 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
73 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
74 -1 1 74338 models.moreattention.CBAM [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
75 57 1 73984 models.common.Conv [64, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
76 65 1 295424 models.common.Conv [128, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
77 74 1 1180672 models.common.Conv [256, 512, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
78 [75, 76, 77] 1 17132 models.yolo.IDetect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

Model Summary: 277 layers, 6089326 parameters, 6089326 gradients, 13.2 GFLOPS
If running the model prints the table above, the modification was successful.
After completing step two, create a new file named yolov5s-cbam.yaml in the models folder of the YOLOv5 project and add the following configuration.
# Parameters
nc: 1  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)
   [-1, 1, CBAM, [1024]],  # 24 (P5/32-large) + attention

   [[17, 20, 24], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
from n params module arguments
0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 models.common.C3 [128, 128, 2]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 models.common.C3 [512, 512, 1]
9 -1 1 656896 models.common.SPPF [512, 512, 5]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 -1 1 296034 models.moreattention.CBAM [512, 512]
25 [17, 20, 24] 1 16182 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

Model Summary: 284 layers, 7318360 parameters, 7318360 gradients, 16.2 GFLOPs
If running the model prints the table above, the modification was successful.
After completing step two, create a new file named yolov5n-cbam.yaml in the models folder of the YOLOv5 project and add the following configuration.
# Parameters
nc: 1  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.25  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)
   [-1, 1, CBAM, [1024]],  # 24 (P5/32-large) + attention

   [[17, 20, 24], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
from n params module arguments
0 -1 1 1760 models.common.Conv [3, 16, 6, 2, 2]
1 -1 1 4672 models.common.Conv [16, 32, 3, 2]
2 -1 1 4800 models.common.C3 [32, 32, 1]
3 -1 1 18560 models.common.Conv [32, 64, 3, 2]
4 -1 2 29184 models.common.C3 [64, 64, 2]
5 -1 1 73984 models.common.Conv [64, 128, 3, 2]
6 -1 3 156928 models.common.C3 [128, 128, 3]
7 -1 1 295424 models.common.Conv [128, 256, 3, 2]
8 -1 1 296448 models.common.C3 [256, 256, 1]
9 -1 1 164608 models.common.SPPF [256, 256, 5]
10 -1 1 33024 models.common.Conv [256, 128, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 90880 models.common.C3 [256, 128, 1, False]
14 -1 1 8320 models.common.Conv [128, 64, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 22912 models.common.C3 [128, 64, 1, False]
18 -1 1 36992 models.common.Conv [64, 64, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 74496 models.common.C3 [128, 128, 1, False]
21 -1 1 147712 models.common.Conv [128, 128, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 296448 models.common.C3 [256, 256, 1, False]
24 -1 1 74338 models.moreattention.CBAM [256, 256]
25 [17, 20, 24] 1 8118 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [64, 128, 256]]

Model Summary: 284 layers, 1839608 parameters, 1839608 gradients, 4.3 GFLOPs
If running the model prints the table above, the modification was successful.
There are more places where an attention module can be inserted: commonly in the backbone or the neck, acting either locally or globally. This article shows only one example modification; in practice attention can be added in many more locations. One more thing to note:
The line self.act = nn.LeakyReLU(0.1) if act else nn.Identity() in the moreattention.py code above is written for YOLOv7-tiny. If you are using YOLOv5 or the full YOLOv7, change it to SiLU, i.e. self.act = nn.SiLU() if act else nn.Identity().
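For reference, the two activations can be compared numerically. This pure-NumPy sketch computes the same elementwise functions as torch's nn.LeakyReLU(0.1) and nn.SiLU():

```python
import numpy as np

def leaky_relu(x, slope=0.1):
    # YOLOv7-tiny's activation: identity for x > 0, slope * x otherwise
    return np.where(x > 0, x, slope * x)

def silu(x):
    # SiLU / Swish, the YOLOv5 and full-YOLOv7 default: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
print(leaky_relu(x))
print(silu(x))
```

Both are smooth enough to train well; the point is simply to keep the activation of the inserted block consistent with the rest of the network.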
Where exactly can attention mechanisms be added? That will be spelled out in detail in the next article; stay tuned.
More articles are on the way, focused on being concise and accurate. Follow me and let's explore together!