Below is an overview of the general strengths and weaknesses of several attention mechanisms:
BAM (Bottleneck Attention Module):
CBAM (Convolutional Block Attention Module):
SE (Squeeze-and-Excitation): (a minimal sketch of this block appears after the list)
CoTAttention (Contextual Transformer Attention):
MobileViTAttention:
SimAM (Simple, Parameter-Free Attention Module):
SK (Selective Kernel):
ShuffleAttention:
S2Attention:
TripletAttention:
ECA (Efficient Channel Attention):
ParNetAttention:
CoordAttention:
MHSA (Multi-Head Self-Attention):
SGE (Spatial Group-wise Enhance):
A2Attention:
GC (Global Context Attention):
EffectiveSE (Effective Squeeze-Excitation):
GE (Gather-Excite Attention):
CrissCrossAttention:
Polarized Self-Attention:
Sequential Self-Attention:
GAM (Global Attention Mechanism):
BiFormer (Bi-Level Routing Attention):
EMA (Efficient Multi-Scale Attention):
CloAttention (Closely Integrated Self-Attention):
LSKBlock:
Note that the strengths and weaknesses above are general observations; actual performance can vary with the task and dataset. When choosing an attention mechanism, evaluate it carefully against the specific task requirements and the available compute budget.
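For a concrete sense of how small these plug-in modules can be, here is a minimal PyTorch sketch of the SE (Squeeze-and-Excitation) block referenced in the list above. It follows the standard squeeze-then-gate formulation; the class name `SEBlock` is ours, and `reduction=16` is the commonly used default from the paper, not something specified in this article.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global average pool -> bottleneck MLP -> channel gates."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),  # excitation: per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # rescale each channel by its learned gate

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```

The same squeeze-then-gate pattern underlies several of the other channel-attention variants listed above; ECA, for example, replaces the bottleneck MLP with a single 1D convolution over the pooled channel descriptor.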
Update 2023.12.19:
EMA (Efficient Multi-Scale Attention):
SimAM (Simple, Parameter-Free Attention Module): (see the sketch after this list)
SpatialGroupEnhance (Spatial Group-wise Enhance):
BiLevelRoutingAttention (Bi-Level Routing Attention):
BiLevelRoutingAttention_nchw (Bi-Level Routing Attention for NCHW tensor layout):
TripletAttention (Triplet Attention):
CoordAtt (Coordinate Attention):
BAMBlock (Bottleneck Attention Module block):
EfficientAttention (Efficient Attention):
LSKBlock (Large Separable Kernel attention block):
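Since SimAM appears in both lists and is parameter-free, a short sketch helps make that concrete. This follows the energy-based weighting from the SimAM paper; `e_lambda=1e-4` is the paper's default, and the class name and demo tensor shapes are ours.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention: weight each activation by an inverse energy score."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda  # regularizer from the paper's closed-form solution

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w - 1                                      # neighbors per channel
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)  # squared deviation from channel mean
        v = d.sum(dim=(2, 3), keepdim=True) / n            # channel variance estimate
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5        # inverse energy per activation
        return x * torch.sigmoid(e_inv)                    # gate activations, no learned weights

x = torch.randn(2, 64, 32, 32)
print(SimAM()(x).shape)  # torch.Size([2, 64, 32, 32])
```

Because it introduces no parameters, a module like this can be dropped after any convolutional block without changing the model's parameter count, which is part of its appeal relative to the learned modules above.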