Reference article for this modification: "Improving YOLOv8: how to add 20+ attention mechanisms to YOLOv8 and experiment with different placements" (一休哥※'s blog on CSDN).
I am using the latest YOLOv8 project files and follow the second insertion method from the article above.
Open the YOLOv8 project and locate conv.py, normally at ultralytics/nn/modules/conv.py.
Append the attention-module code at the end of that file; I copied it from the article above.
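I can't reproduce the article's exact code here, but a widely circulated reference implementation of GAM (Global Attention Mechanism, Liu et al. 2021) looks like the sketch below; treat it as an assumption and diff it against the article's version before training:

```python
import torch
import torch.nn as nn


class GAM_Attention(nn.Module):
    """Global Attention Mechanism: an MLP over the channel dimension for
    channel attention, followed by 7x7 convolutional spatial attention."""

    def __init__(self, c1, c2, rate=4):
        super().__init__()
        self.channel_attention = nn.Sequential(
            nn.Linear(c1, c1 // rate),
            nn.ReLU(inplace=True),
            nn.Linear(c1 // rate, c1),
        )
        self.spatial_attention = nn.Sequential(
            nn.Conv2d(c1, c1 // rate, kernel_size=7, padding=3),
            nn.BatchNorm2d(c1 // rate),
            nn.ReLU(inplace=True),
            nn.Conv2d(c1 // rate, c2, kernel_size=7, padding=3),  # c2 == c1 in this YOLO usage
            nn.BatchNorm2d(c2),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Channel attention: flatten spatial dims so the MLP runs over channels.
        x_permute = x.permute(0, 2, 3, 1).reshape(b, -1, c)
        x_att = self.channel_attention(x_permute).reshape(b, h, w, c)
        x = x * x_att.permute(0, 3, 1, 2).sigmoid()
        # Spatial attention: 7x7 convs produce a per-pixel gate.
        return x * self.spatial_attention(x).sigmoid()
```

As a quick shape check, GAM_Attention(64, 64)(torch.randn(1, 64, 32, 32)) should return a tensor of the same shape as its input.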
Open ultralytics/nn/modules/__init__.py and add GAM_Attention both to the from .conv import list and inside the __all__ tuple:
```python
from .conv import (CBAM, ChannelAttention, Concat, Conv, Conv2, ConvTranspose, DWConv, DWConvTranspose2d, Focus,
                   GhostConv, LightConv, RepConv, SpatialAttention, GAM_Attention)

__all__ = ('Conv', 'Conv2', 'LightConv', 'RepConv', 'DWConv', 'DWConvTranspose2d', 'ConvTranspose', 'Focus',
           'GhostConv', 'ChannelAttention', 'SpatialAttention', 'CBAM', 'Concat', 'TransformerLayer',
           'TransformerBlock', 'MLPBlock', 'LayerNorm2d', 'DFL', 'HGBlock', 'HGStem', 'SPP', 'SPPF', 'C1', 'C2', 'C3',
           'C2f', 'C3x', 'C3TR', 'C3Ghost', 'GhostBottleneck', 'Bottleneck', 'BottleneckCSP', 'Proto', 'Detect',
           'Segment', 'Pose', 'Classify', 'TransformerEncoderLayer', 'RepC3', 'RTDETRDecoder', 'AIFI',
           'DeformableTransformerDecoder', 'DeformableTransformerDecoderLayer', 'MSDeformAttn', 'MLP', 'GAM_Attention')
```
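A quick way to confirm the re-export is wired up (assuming the repo is installed from your checkout, e.g. with pip install -e .):

```python
# Should import cleanly once both conv.py and __init__.py are edited.
from ultralytics.nn.modules import GAM_Attention

print(GAM_Attention)  # expect the class from ultralytics.nn.modules.conv
```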
Open ultralytics/nn/tasks.py and add GAM_Attention inside the from ultralytics.nn.modules import (...) parentheses:
```python
from ultralytics.nn.modules import (AIFI, C1, C2, C3, C3TR, SPP, SPPF, Bottleneck, BottleneckCSP, C2f, C3Ghost, C3x,
                                    Classify, Concat, Conv, Conv2, ConvTranspose, Detect, DWConv, DWConvTranspose2d,
                                    Focus, GhostBottleneck, GhostConv, HGBlock, HGStem, Pose, RepC3, RepConv,
                                    RTDETRDecoder, Segment, GAM_Attention)
```
Continue adding code in tasks.py; the screenshot in the original post (not reproduced here) marks the spot. I didn't understand why it goes there, so I simply tried it.
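For context, in this family of tutorials the extra code is an elif branch inside parse_model(): that function walks the YAML layer list, and its elif chain tells the parser how to turn each module's YAML args into constructor arguments, chiefly the input/output channel counts. A typical branch is sketched below; the exact form varies between ultralytics versions, and ch, f, args, nc, make_divisible, max_channels, and width all come from parse_model()'s local scope, so match it against your copy:

```python
# Fragment of the existing elif chain in parse_model() (ultralytics/nn/tasks.py),
# which dispatches on the module class m -- a sketch, not the author's exact code:
elif m is GAM_Attention:
    c1, c2 = ch[f], args[0]  # c1: channels flowing in from layer f; c2: requested output channels
    if c2 != nc:  # apply the width multiplier, as parse_model does for Conv/C2f etc.
        c2 = make_divisible(min(c2, max_channels) * width, 8)
    args = [c1, c2, *args[1:]]  # so GAM_Attention(c1, c2, ...) receives concrete channel counts
```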
Copy the article's YAML config, save it as yolov8m-Backbone-ATT.yaml at ultralytics/cfg/models/v8/yolov8m-Backbone-ATT.yaml, and change nc to your dataset's class count (an illustrative excerpt follows).
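The article's full file isn't reproduced here. A backbone-attention config usually differs from the stock yolov8.yaml only by inserted GAM_Attention lines; the excerpt below is an assumption about placement and channel arguments, not the article's actual file:

```yaml
# yolov8m-Backbone-ATT.yaml -- illustrative excerpt only; use the article's real file.
nc: 80  # number of classes: edit this to match your dataset
backbone:
  # [from, repeats, module, args]
  # ... stock yolov8 backbone layers up to the deepest C2f ...
  - [-1, 1, GAM_Attention, [1024]]  # assumed placement: after the last C2f, before SPPF
  - [-1, 1, SPPF, [1024, 5]]
```

Note that inserting a layer shifts every later index, so the head section's from fields must be renumbered as well, which is why the tutorial ships a complete YAML rather than a patch. Building the model from the new config is a good end-to-end check:

```python
from ultralytics import YOLO

# Build from the custom config; the printed layer table should list GAM_Attention.
model = YOLO('ultralytics/cfg/models/v8/yolov8m-Backbone-ATT.yaml')
model.info()
```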