CBAM (Convolutional Block Attention Module) contains two attention sub-modules: CAM (Channel Attention Module) and SAM (Spatial Attention Module). CAM produces attention weights along the channel dimension, while SAM produces attention weights over the spatial dimensions (height and width).
The CAM and SAM structure diagrams are shown below; a Keras implementation of both modules follows:
import numpy as np
import tensorflow as tf
import keras
import keras.backend as K
import keras.layers as KL

# Determine whether the input data format is channels_first or channels_last
channel_axis = 1 if K.image_data_format() == "channels_first" else 3

# CAM: channel attention
def channel_attention(input_xs, reduction_ratio=0.125):
    # get channel
    channel = int(input_xs.shape[channel_axis])
    maxpool_channel = KL.GlobalMaxPooling2D()(input_xs)
    maxpool_channel = KL.Reshape((1, 1, channel))(maxpool_channel)
    avgpool_channel = KL.GlobalAvgPool2D()(input_xs)
    avgpool_channel = KL.Reshape((1, 1, channel))(avgpool_channel)
    # Shared MLP: the same two Dense layers are applied to both the max-pool and avg-pool paths
    Dense_One = KL.Dense(units=int(channel * reduction_ratio), activation='relu',
                         kernel_initializer='he_normal', use_bias=True, bias_initializer='zeros')
    Dense_Two = KL.Dense(units=int(channel), activation='relu',
                         kernel_initializer='he_normal', use_bias=True, bias_initializer='zeros')
    # max path
    mlp_1_max = Dense_One(maxpool_channel)
    mlp_2_max = Dense_Two(mlp_1_max)
    mlp_2_max = KL.Reshape(target_shape=(1, 1, int(channel)))(mlp_2_max)
    # avg path
    mlp_1_avg = Dense_One(avgpool_channel)
    mlp_2_avg = Dense_Two(mlp_1_avg)
    mlp_2_avg = KL.Reshape(target_shape=(1, 1, int(channel)))(mlp_2_avg)
    channel_attention_feature = KL.Add()([mlp_2_max, mlp_2_avg])
    channel_attention_feature = KL.Activation('sigmoid')(channel_attention_feature)
    return KL.Multiply()([channel_attention_feature, input_xs])

# SAM: spatial attention
def spatial_attention(channel_refined_feature):
    # Max and mean over the channel axis, concatenated into a 2-channel map,
    # then a single-filter conv produces the spatial attention mask
    maxpool_spatial = KL.Lambda(lambda x: K.max(x, axis=3, keepdims=True))(channel_refined_feature)
    avgpool_spatial = KL.Lambda(lambda x: K.mean(x, axis=3, keepdims=True))(channel_refined_feature)
    max_avg_pool_spatial = KL.Concatenate(axis=3)([maxpool_spatial, avgpool_spatial])
    return KL.Conv2D(filters=1, kernel_size=(3, 3), padding="same", activation='sigmoid',
                     kernel_initializer='he_normal', use_bias=False)(max_avg_pool_spatial)

# CBAM: channel attention followed by spatial attention, with a residual connection
def cbam_module(input_xs, reduction_ratio=0.5):
    channel_refined_feature = channel_attention(input_xs, reduction_ratio=reduction_ratio)
    spatial_attention_feature = spatial_attention(channel_refined_feature)
    refined_feature = KL.Multiply()([channel_refined_feature, spatial_attention_feature])
    return KL.Add()([refined_feature, input_xs])
The tensor shape is left unchanged; instead, the attention modules re-weight every point of the feature map, and after training they raise the weights of the regions the network should attend to. The test below only checks that the shapes are designed correctly; the actual effect has to be evaluated by embedding the module in a CNN and training it.
# Use numpy to simulate an input with a realistic image size
input_xs = np.ones([2, 256, 256, 3], dtype='float32') * 0.5
# Convert the numpy array to a Tensor
input_xs = tf.convert_to_tensor(input_xs)
print(input_xs.shape) # output: (2, 256, 256, 3)
outputs = cbam_module(input_xs)
print(outputs.shape) # output: (2, 256, 256, 3)
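As a rough sketch of how such an embedding could look (the network layout and the name build_toy_cnn are illustrative assumptions, not part of the original post), cbam_module can simply be dropped in after a convolutional block, since it preserves the tensor shape:
from keras.models import Model

def build_toy_cnn(input_shape=(256, 256, 3), num_classes=10):
    inputs = KL.Input(shape=input_shape)
    x = KL.Conv2D(32, (3, 3), padding='same', activation='relu')(inputs)
    x = KL.MaxPooling2D((2, 2))(x)
    # Insert CBAM after a convolutional block; the output shape is unchanged
    x = cbam_module(x)
    x = KL.Conv2D(64, (3, 3), padding='same', activation='relu')(x)
    x = KL.MaxPooling2D((2, 2))(x)
    x = cbam_module(x)
    x = KL.GlobalAveragePooling2D()(x)
    outputs = KL.Dense(num_classes, activation='softmax')(x)
    return Model(inputs, outputs)

model = build_toy_cnn()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()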
Global pooling collapses each channel's feature map into a single value with the corresponding pooling operation. Given an input in the format [batch_size, height, width, channel], say [3, 4, 4, 3], the pooled result holds one value per channel; note that Keras's GlobalMaxPooling2D / GlobalAvgPool2D squeeze the spatial dimensions and return shape [3, 3], which is why the CAM code above reshapes the result back to [3, 1, 1, 3].
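A quick shape check (an illustrative sketch reusing the imports above) makes that reshaping step concrete:
# Shape check for global pooling (illustrative sketch)
x = tf.convert_to_tensor(np.ones([3, 4, 4, 3], dtype='float32'))
pooled = KL.GlobalMaxPooling2D()(x)
print(pooled.shape)   # (3, 3): Keras squeezes the spatial dims to (batch, channel)
pooled = KL.Reshape((1, 1, 3))(pooled)
print(pooled.shape)   # (3, 1, 1, 3): the broadcastable form used inside CAM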
A key point for anyone new to Keras: every layer of the model must come from a class in keras.layers. For example, a concat operation should be implemented with keras.layers.Concatenate rather than keras.backend.concatenate.
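For instance (an illustrative sketch; the tensors a and b are made up for the example), the layer class can be used directly when building a model, whereas a keras.backend function has to be wrapped in a Lambda layer, exactly as the CBAM code does for K.max and K.mean:
# keras.layers classes vs keras.backend functions (illustrative sketch)
a = KL.Input(shape=(4, 4, 1))
b = KL.Input(shape=(4, 4, 1))

# Correct: Concatenate is a Layer, so its output can be used when building a Model
merged = KL.Concatenate(axis=-1)([a, b])

# K.concatenate alone returns a plain tensor with no Layer attached, which breaks
# model construction in plain Keras; wrap backend ops in KL.Lambda instead
merged_backend = KL.Lambda(lambda t: K.concatenate(t, axis=-1))([a, b])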