
A Summary of Attention Mechanisms in CNNs: Where Should Attention Go in a CNN?

Contents

Abstract

1. Channel Attention and Spatial Attention

2. SE-Net: Squeeze-and-Excitation Networks

Implementing the SE module

An alternative SE implementation

3. ECANet: a lightweight improvement to channel attention

4. Coordinate Attention

Abstract

The basic idea of attention in computer vision is to teach the system to attend: to ignore irrelevant information and focus on what matters.

Attention mechanisms can be classified by the domain they attend over:

Spatial domain
Channel domain
Layer domain
Mixed domain
Time domain: a special case implemented with hard attention; because hard attention is trained with reinforcement learning, its training procedure differs from the others.

1. Channel Attention and Spatial Attention

The Convolutional Block Attention Module (CBAM) is an attention module for convolutional blocks that combines spatial and channel attention. Compared with SENet, which only attends over channels, it can achieve better results.

Channel attention: the input feature map is passed through global max pooling and global average pooling (pooling over width and height), and each pooled descriptor goes through a shared MLP. The two MLP outputs are summed element-wise and passed through a sigmoid to produce the channel attention map. Multiplying this map element-wise with the input feature map yields the input features for the spatial attention module.

Spatial attention: this module takes the output of the channel attention module as its input. First, channel-wise global max pooling and global average pooling produce two single-channel maps, which are concatenated along the channel dimension. A convolution then reduces them to one channel, and a sigmoid produces the spatial attention map. Finally, this map is multiplied with the module's input features to give the output.

The code is as follows:

import math
import torch
import torch.nn as nn

try:
    from torch.hub import load_state_dict_from_url
except ImportError:
    from torch.utils.model_zoo import load_url as load_state_dict_from_url


# Channel attention module
class ChannelAttention(nn.Module):
    def __init__(self, in_planes, ratio=16):
        super(ChannelAttention, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        # Shared MLP, implemented as 1x1 convolutions
        self.fc1 = nn.Conv2d(in_planes, in_planes // ratio, 1, bias=False)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Conv2d(in_planes // ratio, in_planes, 1, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = self.fc2(self.relu1(self.fc1(self.avg_pool(x))))
        max_out = self.fc2(self.relu1(self.fc1(self.max_pool(x))))
        out = avg_out + max_out
        return self.sigmoid(out)


# Spatial attention module
class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super(SpatialAttention, self).__init__()
        assert kernel_size in (3, 7), 'kernel size must be 3 or 7'
        padding = 3 if kernel_size == 7 else 1
        self.conv1 = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = torch.mean(x, dim=1, keepdim=True)
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        x = torch.cat([avg_out, max_out], dim=1)
        x = self.conv1(x)
        return self.sigmoid(x)
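A quick sanity check on the two modules (a minimal sketch; the tensor shapes are arbitrary and chosen only for illustration):

x = torch.randn(2, 64, 32, 32)        # b, c, h, w
ca = ChannelAttention(in_planes=64)
sa = SpatialAttention(kernel_size=7)
x = ca(x) * x                         # channel attention first, as in CBAM
x = sa(x) * x                         # then spatial attention
print(x.shape)                        # torch.Size([2, 64, 32, 32])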

Usage example: adding the attention modules to a ResNet.

# conv1x1, BasicBlock and Bottleneck are the standard building blocks
# from torchvision's ResNet implementation
from torchvision.models.resnet import conv1x1, BasicBlock, Bottleneck


class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=1000, zero_init_residual=False,
                 groups=1, width_per_group=64, replace_stride_with_dilation=None,
                 norm_layer=None):
        super(ResNet, self).__init__()
        if norm_layer is None:
            norm_layer = nn.BatchNorm2d
        self._norm_layer = norm_layer
        self.inplanes = 64
        self.dilation = 1
        if replace_stride_with_dilation is None:
            # each element in the tuple indicates if we should replace
            # the 2x2 stride with a dilated convolution instead
            replace_stride_with_dilation = [False, False, False]
        if len(replace_stride_with_dilation) != 3:
            raise ValueError("replace_stride_with_dilation should be None "
                             "or a 3-element tuple, got {}".format(replace_stride_with_dilation))
        self.groups = groups
        self.base_width = width_per_group
        self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = norm_layer(self.inplanes)
        self.relu = nn.ReLU(inplace=True)
        # Attention after the first convolutional layer
        self.ca = ChannelAttention(self.inplanes)
        self.sa = SpatialAttention()
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2,
                                       dilate=replace_stride_with_dilation[0])
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2,
                                       dilate=replace_stride_with_dilation[1])
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2,
                                       dilate=replace_stride_with_dilation[2])
        # Attention after the last convolutional stage
        self.ca1 = ChannelAttention(self.inplanes)
        self.sa1 = SpatialAttention()
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

        # Zero-initialize the last BN in each residual branch,
        # so that the residual branch starts with zeros, and each residual block behaves like an identity.
        # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
        if zero_init_residual:
            for m in self.modules():
                if isinstance(m, Bottleneck):
                    nn.init.constant_(m.bn3.weight, 0)
                elif isinstance(m, BasicBlock):
                    nn.init.constant_(m.bn2.weight, 0)

    def _make_layer(self, block, planes, blocks, stride=1, dilate=False):
        norm_layer = self._norm_layer
        downsample = None
        previous_dilation = self.dilation
        if dilate:
            self.dilation *= stride
            stride = 1
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                conv1x1(self.inplanes, planes * block.expansion, stride),
                norm_layer(planes * block.expansion),
            )
        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample, self.groups,
                            self.base_width, previous_dilation, norm_layer))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes, groups=self.groups,
                                base_width=self.base_width, dilation=self.dilation,
                                norm_layer=norm_layer))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.ca(x) * x
        x = self.sa(x) * x
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.ca1(x) * x
        x = self.sa1(x) * x
        x = self.avgpool(x)
        x = x.reshape(x.size(0), -1)
        x = self.fc(x)
        return x

Note: if you want to keep using ImageNet-pretrained parameters, CBAM cannot be inserted inside ResNet's residual blocks, because that changes the network structure and the pretrained weights no longer match. Adding it after the first convolutional layer and after the last convolutional stage leaves the original backbone unchanged, so the pretrained parameters can still be loaded, as shown in the sketch below.
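A minimal sketch of loading pretrained weights under this setup. It assumes the ResNet class above plus torchvision's Bottleneck block, and uses strict=False so that the attention modules missing from the checkpoint are simply skipped and trained from scratch (the call uses the older pretrained=True torchvision API):

import torchvision
from torchvision.models.resnet import Bottleneck

def resnet50_cbam(num_classes=1000, pretrained=True):
    model = ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes)
    if pretrained:
        # The checkpoint only covers the original ResNet-50 layers;
        # strict=False leaves ca/sa/ca1/sa1 randomly initialized.
        # For a num_classes other than 1000, drop the fc.* entries first.
        state_dict = torchvision.models.resnet50(pretrained=True).state_dict()
        model.load_state_dict(state_dict, strict=False)
    return model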

Where the modules are added:

# Attention after the first convolutional layer
self.ca = ChannelAttention(self.inplanes)
self.sa = SpatialAttention()

# Attention after the last convolutional stage
self.ca1 = ChannelAttention(self.inplanes)
self.sa1 = SpatialAttention()

The corresponding part of forward:

x = self.ca(x) * x
x = self.sa(x) * x
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.ca1(x) * x
x = self.sa1(x) * x

Alternatively, CBAM can be wrapped as a standalone module and inserted into a residual block (note that this changes the structure, so ImageNet-pretrained weights no longer match):

import torch
import torch.nn as nn


class ChannelAttentionModule(nn.Module):
    def __init__(self, channel, ratio=16):
        super(ChannelAttentionModule, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.shared_MLP = nn.Sequential(
            nn.Conv2d(channel, channel // ratio, 1, bias=False),
            nn.ReLU(),
            nn.Conv2d(channel // ratio, channel, 1, bias=False)
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avgout = self.shared_MLP(self.avg_pool(x))
        maxout = self.shared_MLP(self.max_pool(x))
        return self.sigmoid(avgout + maxout)


class SpatialAttentionModule(nn.Module):
    def __init__(self):
        super(SpatialAttentionModule, self).__init__()
        self.conv2d = nn.Conv2d(in_channels=2, out_channels=1, kernel_size=7, stride=1, padding=3)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avgout = torch.mean(x, dim=1, keepdim=True)
        maxout, _ = torch.max(x, dim=1, keepdim=True)
        out = torch.cat([avgout, maxout], dim=1)
        out = self.sigmoid(self.conv2d(out))
        return out


class CBAM(nn.Module):
    def __init__(self, channel):
        super(CBAM, self).__init__()
        self.channel_attention = ChannelAttentionModule(channel)
        self.spatial_attention = SpatialAttentionModule()

    def forward(self, x):
        out = self.channel_attention(x) * x
        out = self.spatial_attention(out) * out
        return out


class ResBlock_CBAM(nn.Module):
    def __init__(self, in_places, places, stride=1, downsampling=False, expansion=4):
        super(ResBlock_CBAM, self).__init__()
        self.expansion = expansion
        self.downsampling = downsampling
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_channels=in_places, out_channels=places, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places * self.expansion, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places * self.expansion),
        )
        # CBAM applied to the output of the bottleneck, before the residual add
        self.cbam = CBAM(channel=places * self.expansion)
        if self.downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels=in_places, out_channels=places * self.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(places * self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x
        out = self.bottleneck(x)
        out = self.cbam(out)
        if self.downsampling:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out


if __name__ == '__main__':
    model = ResBlock_CBAM(in_places=16, places=4)
    print(model)
    input = torch.randn(1, 16, 64, 64)
    out = model(input)
    print(out.shape)

 

2. SE-Net: Squeeze-and-Excitation Networks


Paper: https://arxiv.org/abs/1709.01507
Official code: https://github.com/hujie-frank/SENet
PyTorch code: https://github.com/miraclewkf/SENet-PyTorch

SE-Net won the classification task of the final ImageNet competition (ImageNet 2017). Its basic principle: for each output channel, predict a scalar weight and rescale that channel by it.

Step one: the H×W values of each channel are reduced to a single scalar by global average pooling; this is the Squeeze step. Two FC layers then turn these scalars into one weight between 0 and 1 per channel, and every element of each H×W map is multiplied by its channel's weight to produce the new feature map; this is the Excitation step. Any network architecture can be given this kind of Squeeze-and-Excitation feature recalibration.

Concretely, the module is Global Average Pooling → FC → ReLU → FC → Sigmoid: the first FC reduces the channel dimension and the second restores it, producing C weights, one per channel, each used to scale the corresponding channel. The reduction ratio r controls the bottleneck; the paper's experiments settled on r = 16 as a good balance between performance and computational cost. The core idea of SENet is to let the network learn feature weights from the loss, so that effective feature maps receive large weights and ineffective or weakly useful ones receive small weights, training the model to better results.

Implementing the SE module

Here is a PyTorch implementation (based on senet.pytorch):

class SELayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(SELayer, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)
        y = self.fc(y).view(b, c, 1, 1)
        return x * y.expand_as(x)
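A quick shape check (a minimal sketch with arbitrary dimensions):

x = torch.randn(2, 64, 32, 32)
se = SELayer(channel=64, reduction=16)
y = se(x)        # same shape as x; each channel rescaled by its learned weight
print(y.shape)   # torch.Size([2, 64, 32, 32])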

To use the SE module in a ResNet, simply add it to the residual unit (it is applied on the residual-learning branch):

class SEBottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None, reduction=16):
        super(SEBottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.se = SELayer(planes * 4, reduction)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv3(out)
        out = self.bn3(out)
        out = self.se(out)
        if self.downsample is not None:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out

An alternative SE implementation

This version replaces the fully connected layers with 1×1 convolutions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    def __init__(self, input_channels, internal_neurons):
        super(SEBlock, self).__init__()
        # 1x1 convolutions play the role of the two FC layers
        self.down = nn.Conv2d(in_channels=input_channels, out_channels=internal_neurons,
                              kernel_size=1, stride=1, bias=True)
        self.up = nn.Conv2d(in_channels=internal_neurons, out_channels=input_channels,
                            kernel_size=1, stride=1, bias=True)

    def forward(self, inputs):
        # global average pooling (assumes square feature maps here)
        x = F.avg_pool2d(inputs, kernel_size=inputs.size(3))
        x = self.down(x)
        x = F.leaky_relu(x)
        x = self.up(x)
        x = torch.sigmoid(x)
        x = x.repeat(1, 1, inputs.size(2), inputs.size(3))
        return inputs * x

3. ECANet: a lightweight improvement to channel attention

Paper: https://arxiv.org/abs/1910.03151

Code: https://github.com/BangguWu/ECANet

Chinese translation of the paper: https://wanghao.blog.csdn.net/article/details/113073026

ECANet refines the SE module: it proposes a local cross-channel interaction strategy without dimensionality reduction (the ECA module), together with a method for adaptively selecting the size of the 1D convolution kernel, which improves performance. A sketch of the kernel-size rule follows.
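For reference, the paper derives the kernel size k from the channel count C as the nearest odd value of (log2(C) + b) / γ, with γ = 2 and b = 1. A small sketch of that rule, following the formulation used in the official repository:

import math

def eca_kernel_size(channel, gamma=2, b=1):
    # k = |(log2(C) + b) / gamma| rounded to the nearest odd number
    t = int(abs((math.log(channel, 2) + b) / gamma))
    return t if t % 2 else t + 1

print(eca_kernel_size(64))   # 3
print(eca_kernel_size(512))  # 5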

Implementation of ECANet:

class eca_layer(nn.Module):
    """Constructs an ECA module.
    Args:
        channel: Number of channels of the input feature map
        k_size: Adaptive selection of kernel size
    """
    def __init__(self, channel, k_size=3):
        super(eca_layer, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=(k_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # x: input features with shape [b, c, h, w]
        b, c, h, w = x.size()
        # feature descriptor on the global spatial information
        y = self.avg_pool(x)
        # 1D convolution across channels (no dimensionality reduction)
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        # Multi-scale information fusion
        y = self.sigmoid(y)
        return x * y.expand_as(x)
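A minimal standalone check (shapes are arbitrary):

x = torch.randn(2, 64, 32, 32)
eca = eca_layer(channel=64, k_size=3)
y = eca(x)
print(y.shape)   # torch.Size([2, 64, 32, 32])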

Calling ECANet inside a model. The CRBlock below is taken from a larger project whose helper modules (ConvBN, Encoder_conv, ACBlock, Mish) are not defined in this article; it is shown only to illustrate where eca_layer is inserted:

# Note: ConvBN, Encoder_conv, ACBlock and Mish are helper modules from the
# original project and are not defined in this article.
from collections import OrderedDict

channelNum = 64


class CRBlock(nn.Module):
    def __init__(self):
        super(CRBlock, self).__init__()
        self.convban = nn.Sequential(OrderedDict([
            ("conv3x3_bn", ConvBN(channelNum, channelNum, 3)),
        ]))
        self.path1 = Encoder_conv(channelNum, 2)
        self.path2 = nn.Sequential(OrderedDict([
            ('conv1x5', ConvBN(channelNum, channelNum, [1, 3])),
            ('conv5x1', ConvBN(channelNum, channelNum, 3)),
            ('ac', ACBlock(channelNum, channelNum, kernel_size=3)),
            ('eca', eca_layer(channelNum, 3)),
        ]))
        # NOTE: this second assignment overrides the self.path2 defined just above
        self.path2 = nn.Sequential(OrderedDict([
            ('conv1x5', ConvBN(channelNum, channelNum, [1, 5])),
            ('conv5x1', ConvBN(channelNum, channelNum, [5, 1])),
            ("conv9x1_bn", ConvBN(channelNum, channelNum, 1)),
            ('eca', eca_layer(channelNum, 3)),
        ]))
        self.encoder_conv = Encoder_conv(channelNum * 4)
        self.encoder_conv1 = ConvBN(channelNum * 4, channelNum, 1)
        self.identity = nn.Identity()
        self.relu = Mish()
        self.ca1 = eca_layer(channelNum * 4, 3)

    def forward(self, x):
        identity = self.identity(x)
        x = self.convban(x)
        out1 = self.path1(x)
        out2 = self.path2(x)
        out3 = self.path2(x)
        out = torch.cat((out1, out2, out3, x), dim=1)
        out = self.relu(out)
        out = self.encoder_conv(out)
        out = self.ca1(out)
        out = self.encoder_conv1(out)
        out = self.relu(out + identity)
        return out

4. Coordinate Attention

Paper: https://arxiv.org/abs/2103.02907

Code: https://github.com/Andrew-Qibin/CoordAttention

Coordinate Attention encodes channel relationships and long-range dependencies using precise positional information. It consists of two steps: coordinate information embedding and coordinate attention generation, summarized below.
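In formulas (following the paper; the notation matches the code further below): coordinate information embedding pools each channel c along the two spatial directions separately,

z_c^h(h) = \frac{1}{W} \sum_{0 \le i < W} x_c(h, i), \qquad z_c^w(w) = \frac{1}{H} \sum_{0 \le j < H} x_c(j, w).

Coordinate attention generation then concatenates the two descriptors, transforms them with a shared 1×1 convolution, splits the result back into a height part and a width part, and applies the sigmoid-activated transforms F_h and F_w to obtain the attention maps g^h and g^w. The output is

y_c(i, j) = x_c(i, j) \times g_c^h(i) \times g_c^w(j).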

The network structure is shown in the figure in the paper (image not reproduced here).

For more details, see: CVPR 2021 | Plug-and-play! CA: a new attention mechanism that boosts classification, detection, and segmentation.

A PyTorch implementation of Coordinate Attention:

import torch
from torch import nn


class CA_Block(nn.Module):
    def __init__(self, channel, h, w, reduction=16):
        super(CA_Block, self).__init__()
        self.h = h
        self.w = w
        # directional pooling: one descriptor per row and one per column
        self.avg_pool_x = nn.AdaptiveAvgPool2d((h, 1))
        self.avg_pool_y = nn.AdaptiveAvgPool2d((1, w))
        self.conv_1x1 = nn.Conv2d(in_channels=channel, out_channels=channel // reduction,
                                  kernel_size=1, stride=1, bias=False)
        self.relu = nn.ReLU()
        self.bn = nn.BatchNorm2d(channel // reduction)
        self.F_h = nn.Conv2d(in_channels=channel // reduction, out_channels=channel,
                             kernel_size=1, stride=1, bias=False)
        self.F_w = nn.Conv2d(in_channels=channel // reduction, out_channels=channel,
                             kernel_size=1, stride=1, bias=False)
        self.sigmoid_h = nn.Sigmoid()
        self.sigmoid_w = nn.Sigmoid()

    def forward(self, x):
        x_h = self.avg_pool_x(x).permute(0, 1, 3, 2)
        x_w = self.avg_pool_y(x)
        x_cat_conv_relu = self.relu(self.conv_1x1(torch.cat((x_h, x_w), 3)))
        x_cat_conv_split_h, x_cat_conv_split_w = x_cat_conv_relu.split([self.h, self.w], 3)
        s_h = self.sigmoid_h(self.F_h(x_cat_conv_split_h.permute(0, 1, 3, 2)))
        s_w = self.sigmoid_w(self.F_w(x_cat_conv_split_w))
        out = x * s_h.expand_as(x) * s_w.expand_as(x)
        return out


if __name__ == '__main__':
    x = torch.randn(1, 16, 128, 64)  # b, c, h, w
    ca_model = CA_Block(channel=16, h=128, w=64)
    y = ca_model(x)
    print(y.shape)

References:

Adding attention (CBAM) to ResNet in PyTorch: should ImageNet pretraining be used, and how to load pretrained parameters? (Zhihu)

A summary of attention mechanisms: SENet, CBAM, ECANet, SCNet, GCNet (CSDN blog)
