Reproducing the CBAM Attention Module in Keras


1. Introduction to the CBAM Attention Module

CBAM (Convolutional Block Attention Module) consists of two attention sub-modules: CAM (Channel Attention Module) and SAM (Spatial Attention Module). CAM computes attention weights along the channel dimension, while SAM computes attention weights over the spatial dimensions (height, width).
[Figure: the CBAM module]
The structures of CAM and SAM are shown below:
[Figure: the CAM and SAM modules]
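For reference, the CBAM paper (Woo et al., 2018) defines the two attention maps as

$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big)$$

$$M_s(F) = \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(F);\,\mathrm{MaxPool}(F)])\big)$$

which are applied sequentially: $F' = M_c(F) \otimes F$, then $F'' = M_s(F') \otimes F'$, where $\sigma$ is the sigmoid function and $\otimes$ denotes element-wise multiplication with broadcasting.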

2. Module Reproduction

2.1 Keras Implementation

import numpy as np
import tensorflow as tf
import keras
import keras.backend as K
import keras.layers as KL

# Determine the input data format (channels_first or channels_last);
# the reshapes and axis arguments below assume channels_last
channel_axis = 1 if K.image_data_format() == "channels_first" else 3

# CAM: channel attention
def channel_attention(input_xs, reduction_ratio=0.125):
    # number of input channels
    channel = int(input_xs.shape[channel_axis])
    maxpool_channel = KL.GlobalMaxPooling2D()(input_xs)
    maxpool_channel = KL.Reshape((1, 1, channel))(maxpool_channel)
    avgpool_channel = KL.GlobalAvgPool2D()(input_xs)
    avgpool_channel = KL.Reshape((1, 1, channel))(avgpool_channel)
    # shared two-layer MLP; the hidden layer compresses the channels by reduction_ratio
    Dense_One = KL.Dense(units=int(channel * reduction_ratio), activation='relu', kernel_initializer='he_normal', use_bias=True, bias_initializer='zeros')
    # no activation on the second layer: the sigmoid is applied after the two paths are summed
    Dense_Two = KL.Dense(units=int(channel), activation=None, kernel_initializer='he_normal', use_bias=True, bias_initializer='zeros')
    # max-pooling path
    mlp_1_max = Dense_One(maxpool_channel)
    mlp_2_max = Dense_Two(mlp_1_max)
    mlp_2_max = KL.Reshape(target_shape=(1, 1, int(channel)))(mlp_2_max)
    # average-pooling path
    mlp_1_avg = Dense_One(avgpool_channel)
    mlp_2_avg = Dense_Two(mlp_1_avg)
    mlp_2_avg = KL.Reshape(target_shape=(1, 1, int(channel)))(mlp_2_avg)
    channel_attention_feature = KL.Add()([mlp_2_max, mlp_2_avg])
    channel_attention_feature = KL.Activation('sigmoid')(channel_attention_feature)
    return KL.Multiply()([channel_attention_feature, input_xs])

# SAM: spatial attention
def spatial_attention(channel_refined_feature):
    # max and mean over the channel axis, keeping a singleton channel dimension
    maxpool_spatial = KL.Lambda(lambda x: K.max(x, axis=3, keepdims=True))(channel_refined_feature)
    avgpool_spatial = KL.Lambda(lambda x: K.mean(x, axis=3, keepdims=True))(channel_refined_feature)
    max_avg_pool_spatial = KL.Concatenate(axis=3)([maxpool_spatial, avgpool_spatial])
    # single-channel convolution producing the spatial attention map
    # (the CBAM paper uses a 7x7 kernel; this reproduction uses 3x3)
    return KL.Conv2D(filters=1, kernel_size=(3, 3), padding="same", activation='sigmoid', kernel_initializer='he_normal', use_bias=False)(max_avg_pool_spatial)


def cbam_module(input_xs, reduction_ratio=0.5):
    channel_refined_feature = channel_attention(input_xs, reduction_ratio=reduction_ratio)
    spatial_attention_feature = spatial_attention(channel_refined_feature)
    refined_feature = KL.Multiply()([channel_refined_feature, spatial_attention_feature])
    # residual connection: add the original input back onto the refined feature
    return KL.Add()([refined_feature, input_xs])

2.2 Test

The tensor shape is unchanged, but the attention module rescales the value at every position of the feature map; after training, the module raises the weights of the regions that deserve more attention. This test only checks that the shapes work out; the actual effect has to be evaluated by embedding the module in a CNN and training it (see the sketch after the test code).

# use numpy to simulate a batch of image-sized inputs
input_xs = np.ones([2, 256, 256, 3], dtype='float32') * 0.5
# convert the numpy array to a Tensor
input_xs = tf.convert_to_tensor(input_xs)
print(input_xs.shape) # output: (2, 256, 256, 3)
outputs = cbam_module(input_xs)
print(outputs.shape) # output: (2, 256, 256, 3)
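As a minimal sketch of that embedding, cbam_module can be dropped in after any convolutional block. The two-conv-block backbone and the 10-class softmax head below are illustrative assumptions, not part of the original post:

# Hypothetical toy CNN with CBAM inserted after each conv block;
# the layer widths and the 10-class head are made up for illustration
inputs = KL.Input(shape=(256, 256, 3))
x = KL.Conv2D(32, (3, 3), padding='same', activation='relu')(inputs)
x = cbam_module(x)                       # refine the feature map with CBAM
x = KL.MaxPooling2D((2, 2))(x)
x = KL.Conv2D(64, (3, 3), padding='same', activation='relu')(x)
x = cbam_module(x)
x = KL.GlobalAvgPool2D()(x)
outputs = KL.Dense(10, activation='softmax')(x)
model = keras.models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()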

3. Supplementary Notes

3.1 Global Average/Max Pooling

Global pooling collapses each channel's entire feature map to a single value with the corresponding pooling operation (max or average). With the input format [batch_size, height, width, channels], an input of shape [3, 4, 4, 3] becomes [3, 1, 1, 3] after global pooling. Note that Keras's GlobalMaxPooling2D / GlobalAvgPool2D layers actually squeeze the spatial dimensions and return shape [3, 3], which is why the channel_attention code above reshapes the result back to (1, 1, channels).
[Figure: global average/max pooling]
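A quick shape check (reusing the imports from section 2.1) confirms this squeeze-then-reshape behavior:

x = np.ones([3, 4, 4, 3], dtype='float32')
x = tf.convert_to_tensor(x)
pooled = KL.GlobalAvgPool2D()(x)
print(pooled.shape)  # (3, 3): height and width are squeezed away
pooled = KL.Reshape((1, 1, 3))(pooled)
print(pooled.shape)  # (3, 1, 1, 3): broadcastable against a (3, 4, 4, 3) feature map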

3.2 Building Models in Keras

A crucial point for Keras beginners: every node in a model must be an instance of a class from keras.layers. For example, to implement a concat operation you need keras.layers.Concatenate rather than keras.backend.concatenate, and raw backend ops must be wrapped in a keras.layers.Lambda layer before they can appear in a model.
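This is exactly why spatial_attention wraps K.max and K.mean in KL.Lambda. A minimal sketch of the pattern (reusing the imports from section 2.1):

inputs = KL.Input(shape=(4, 4, 3))
# backend ops like K.max are not layers; wrap them in KL.Lambda to use them in a model
max_map = KL.Lambda(lambda t: K.max(t, axis=3, keepdims=True))(inputs)
mean_map = KL.Lambda(lambda t: K.mean(t, axis=3, keepdims=True))(inputs)
# use the layer class KL.Concatenate, not the backend function K.concatenate
merged = KL.Concatenate(axis=3)([max_map, mean_map])
model = keras.models.Model(inputs, merged)
print(model.output_shape)  # (None, 4, 4, 2)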
