YOLOv10 has been out for a few days now, and this time I chose not to rush out an analysis. Such frequent releases of numbered YOLO versions are genuinely a headache; a new number does not make the older techniques obsolete, but it will certainly add anxiety for many students still in school. Please keep a level-headed, dialectical view of it.
The changes in v10 are not extensive; many beginners even feel nothing has changed. The network structure indeed changes little, and the main contribution is NMS-free training. That contribution does not show up in the model yaml file, so you will not see much by looking at the yaml alone. In terms of results, though, v10 is simply stronger than v8; there is no arguing with that.
It is worth noting that v10 still runs on the yolov8 codebase. In other words, up to now YOLOv5/v7/v9 share one framework and YOLOv8/v10 share another, and both are maintained by the ultralytics team. This confirms something I wrote a couple of years ago (in 2022): a good baseline framework is critical, and a highly active open-source project is a rare find whose value to you is hard to overstate.
What does that mean for you? If you can use one of v5/v7/v9 you can use all three, and if you can use one of v8/v10 you can use both. We can also port the v10 improvements into v8 without much thought, so there is no need to worry that v8 is "older" than v10. Any future numbered YOLO version is, for you, just another set of improvement points.
How do you choose a suitable baseline?
- From the hardware angle: consider whether your own hardware can actually train a large model. If your GPU is slow or its memory is small, a model with too many parameters is simply not trainable for you (a quick comparison sketch follows this list).
- From the training-cost angle: consider your budget in money and time. If the lab has no GPU you will be renting cloud machines, and a project typically needs dozens of training runs, so the cost adds up quickly; a smaller baseline trains faster and asks less of the hardware.
- From the evaluation-metric angle: papers usually come with an application scenario, and the scenario exists to impose a real requirement. Ideally accuracy scales with model size, so if the application needs an mAP of 0.9 and YOLOv5s only reaches 0.6, no amount of tuning will close that gap; start from a larger baseline such as YOLOv5l/x instead.
- From the open-source-code angle: open-source work is excellent these days, and a month of your own coding can easily be worth less than one GitHub project. When picking an algorithm to improve, be honest about your own coding level: is there open-source code, is it clear and readable, and is it easy to modify?
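If you are unsure which size to start from, a quick way to compare candidates is to build them from their yamls and look at parameter counts and GFLOPs before spending any GPU time. This is only a convenience sketch using the standard Ultralytics API; the model names are the stock configs shipped with the package.

from ultralytics import YOLO

# Build each candidate baseline from its config (no weights needed) and print
# layer count, parameter count and GFLOPs, so they can be compared on paper first.
for name in ("yolov8n.yaml", "yolov8s.yaml", "yolov8m.yaml"):
    model = YOLO(name)
    model.info()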
Why do I keep recommending yolov8? Because this framework really is very easy to use and very friendly to beginners; whether for papers or for work, I think it is genuinely worth learning. It currently supports classification, detection, segmentation, keypoints/pose and open-vocabulary detection, with depth estimation and other tasks planned, so learning it will pay off a great deal.
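For anyone who has not touched the framework yet, the everyday workflow is only a few lines. A minimal sketch; the example image URL and the coco128 dataset yaml are the standard samples shipped with the package, not anything specific to this post.

from ultralytics import YOLO

# Load a pretrained detection model; the -seg / -cls / -pose variants follow the same pattern.
model = YOLO("yolov8n.pt")

# Run inference on the stock example image.
results = model.predict("https://ultralytics.com/images/bus.jpg")

# Fine-tune on the small coco128 dataset (downloaded automatically on first use).
model.train(data="coco128.yaml", epochs=3, imgsz=640)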
Below is a quick recap of the paper, followed by a short tutorial on adding three of the v10 modules to v8. The recap was put together with the help of Ai Drive and covers the essentials; if you care about how much each module actually contributes, go straight to the ablation studies in the paper.
Over the past few years, the YOLO (You Only Look Once) family has become the mainstream choice for real-time object detection thanks to its excellent balance between computational cost and detection performance. The recent paper "YOLOv10: Real-Time End-to-End Object Detection" from a Tsinghua University team pushes the performance and efficiency frontier of YOLO further. This section summarizes its innovations, main contributions and experimental results.
A YOLO detection pipeline consists of the model's forward pass plus non-maximum suppression (NMS) post-processing. The dependence on NMS is an efficiency bottleneck for end-to-end deployment, and the individual YOLO components have never been optimized holistically, leaving visible computational redundancy that caps the achievable performance.
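To make the bottleneck concrete, here is a small sketch (my own, not code from the paper) contrasting the usual confidence-filter-plus-NMS post-processing with the plain top-k selection that a one-to-one head allows; dropping the pairwise suppression step is exactly what "end-to-end" buys you at deployment time.

import torch
from torchvision.ops import nms

def postprocess_with_nms(boxes, scores, conf_thres=0.25, iou_thres=0.65, max_det=300):
    # Conventional YOLO-style post-processing: confidence filter + IoU suppression.
    keep = scores > conf_thres
    boxes, scores = boxes[keep], scores[keep]
    idx = nms(boxes, scores, iou_thres)  # pairwise-overlap suppression
    return boxes[idx[:max_det]], scores[idx[:max_det]]

def postprocess_nms_free(boxes, scores, max_det=300):
    # With one-to-one assignment the head emits at most one box per object,
    # so a simple top-k by confidence is enough.
    scores, idx = scores.topk(min(max_det, scores.numel()))
    return boxes[idx], scores

# Toy usage with random xyxy boxes.
boxes = torch.rand(100, 4) * 640
boxes[:, 2:] += boxes[:, :2]
scores = torch.rand(100)
print(postprocess_with_nms(boxes, scores)[0].shape)
print(postprocess_nms_free(boxes, scores)[0].shape)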
To address these problems, the paper proposes two key innovations: consistent dual assignments for NMS-free training, and a holistic efficiency-accuracy driven model design.
Consistent dual assignments: during training a one-to-many head and a one-to-one head are supervised jointly with a consistent matching metric; at inference only the one-to-one head is kept, so NMS can be dropped.
Holistic efficiency-accuracy driven model design: on the efficiency side, a lightweight classification head, spatial-channel decoupled downsampling (SCDown) and a rank-guided block design (CIB/C2fCIB).
Large-kernel convolution and partial self-attention (PSA): on the accuracy side, enlarging the receptive field with large-kernel depthwise convolution and adding inexpensive global modeling on the deepest feature map.
Extensive experiments on COCO validate YOLOv10's superior accuracy and efficiency at every model scale; see the paper for the exact numbers.
Merging all of the v10 changes into the v8 main branch touches roughly 14 python files and 6 yaml files; for the full code just read this PR: https://github.com/ultralytics/ultralytics/pull/13113/files. Since I know my readers fairly well, I will not walk through how to change all 14 files; if you use the Ultralytics framework, simply wait for the official merge. Here we only port the contributions other than NMS-free, which come down to three modules you can take apart and reuse as you see fit:
In ultralytics/ultralytics/nn/modules/block.py, add the following code:
from ultralytics.utils.torch_utils import fuse_conv_and_bn


class RepVGGDW(torch.nn.Module):
    """RepVGGDW is a class that represents a depth wise separable convolutional block in RepVGG architecture."""

    def __init__(self, ed) -> None:
        super().__init__()
        self.conv = Conv(ed, ed, 7, 1, 3, g=ed, act=False)
        self.conv1 = Conv(ed, ed, 3, 1, 1, g=ed, act=False)
        self.dim = ed
        self.act = nn.SiLU()

    def forward(self, x):
        """
        Performs a forward pass of the RepVGGDW block.

        Args:
            x (torch.Tensor): Input tensor.

        Returns:
            (torch.Tensor): Output tensor after applying the depth wise separable convolution.
        """
        return self.act(self.conv(x) + self.conv1(x))

    def forward_fuse(self, x):
        """
        Performs a forward pass of the RepVGGDW block after the convolutions have been fused.

        Args:
            x (torch.Tensor): Input tensor.

        Returns:
            (torch.Tensor): Output tensor after applying the depth wise separable convolution.
        """
        return self.act(self.conv(x))

    @torch.no_grad()
    def fuse(self):
        """
        Fuses the convolutional layers in the RepVGGDW block.

        This method fuses the convolutional layers and updates the weights and biases accordingly.
        """
        conv = fuse_conv_and_bn(self.conv.conv, self.conv.bn)
        conv1 = fuse_conv_and_bn(self.conv1.conv, self.conv1.bn)

        conv_w = conv.weight
        conv_b = conv.bias
        conv1_w = conv1.weight
        conv1_b = conv1.bias

        # Pad the 3x3 kernel to 7x7 so the two branches can be merged into one conv.
        conv1_w = torch.nn.functional.pad(conv1_w, [2, 2, 2, 2])

        final_conv_w = conv_w + conv1_w
        final_conv_b = conv_b + conv1_b

        conv.weight.data.copy_(final_conv_w)
        conv.bias.data.copy_(final_conv_b)

        self.conv = conv
        del self.conv1


class CIB(nn.Module):
    """
    Conditional Identity Block (CIB) module.

    Args:
        c1 (int): Number of input channels.
        c2 (int): Number of output channels.
        shortcut (bool, optional): Whether to add a shortcut connection. Defaults to True.
        e (float, optional): Scaling factor for the hidden channels. Defaults to 0.5.
        lk (bool, optional): Whether to use RepVGGDW for the third convolutional layer. Defaults to False.
    """

    def __init__(self, c1, c2, shortcut=True, e=0.5, lk=False):
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = nn.Sequential(
            Conv(c1, c1, 3, g=c1),
            Conv(c1, 2 * c_, 1),
            Conv(2 * c_, 2 * c_, 3, g=2 * c_) if not lk else RepVGGDW(2 * c_),
            Conv(2 * c_, c2, 1),
            Conv(c2, c2, 3, g=c2),
        )

        self.add = shortcut and c1 == c2

    def forward(self, x):
        """
        Forward pass of the CIB module.

        Args:
            x (torch.Tensor): Input tensor.

        Returns:
            (torch.Tensor): Output tensor.
        """
        return x + self.cv1(x) if self.add else self.cv1(x)


class C2fCIB(C2f):
    """
    C2fCIB class represents a convolutional block with C2f and CIB modules.

    Args:
        c1 (int): Number of input channels.
        c2 (int): Number of output channels.
        n (int, optional): Number of CIB modules to stack. Defaults to 1.
        shortcut (bool, optional): Whether to use shortcut connection. Defaults to False.
        lk (bool, optional): Whether to use local key connection. Defaults to False.
        g (int, optional): Number of groups for grouped convolution. Defaults to 1.
        e (float, optional): Expansion ratio for CIB modules. Defaults to 0.5.
    """

    def __init__(self, c1, c2, n=1, shortcut=False, lk=False, g=1, e=0.5):
        super().__init__(c1, c2, n, shortcut, g, e)
        self.m = nn.ModuleList(CIB(self.c, self.c, shortcut, e=1.0, lk=lk) for _ in range(n))


class Attention(nn.Module):
    """
    Attention module that performs self-attention on the input tensor.

    Args:
        dim (int): The input tensor dimension.
        num_heads (int): The number of attention heads.
        attn_ratio (float): The ratio of the attention key dimension to the head dimension.

    Attributes:
        num_heads (int): The number of attention heads.
        head_dim (int): The dimension of each attention head.
        key_dim (int): The dimension of the attention key.
        scale (float): The scaling factor for the attention scores.
        qkv (Conv): Convolutional layer for computing the query, key, and value.
        proj (Conv): Convolutional layer for projecting the attended values.
        pe (Conv): Convolutional layer for positional encoding.
    """

    def __init__(self, dim, num_heads=8, attn_ratio=0.5):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.key_dim = int(self.head_dim * attn_ratio)
        self.scale = self.key_dim**-0.5
        nh_kd = self.key_dim * num_heads
        h = dim + nh_kd * 2
        self.qkv = Conv(dim, h, 1, act=False)
        self.proj = Conv(dim, dim, 1, act=False)
        self.pe = Conv(dim, dim, 3, 1, g=dim, act=False)

    def forward(self, x):
        """
        Forward pass of the Attention module.

        Args:
            x (torch.Tensor): The input tensor.

        Returns:
            (torch.Tensor): The output tensor after self-attention.
        """
        B, _, H, W = x.shape
        N = H * W
        qkv = self.qkv(x)
        q, k, v = qkv.view(B, self.num_heads, -1, N).split([self.key_dim, self.key_dim, self.head_dim], dim=2)

        attn = (q.transpose(-2, -1) @ k) * self.scale
        attn = attn.softmax(dim=-1)
        x = (v @ attn.transpose(-2, -1)).view(B, -1, H, W) + self.pe(v.reshape(B, -1, H, W))
        x = self.proj(x)
        return x


class PSA(nn.Module):
    """
    Position-wise Spatial Attention module.

    Args:
        c1 (int): Number of input channels.
        c2 (int): Number of output channels.
        e (float): Expansion factor for the intermediate channels. Default is 0.5.

    Attributes:
        c (int): Number of intermediate channels.
        cv1 (Conv): 1x1 convolution layer to reduce the number of input channels to 2*c.
        cv2 (Conv): 1x1 convolution layer to reduce the number of output channels to c.
        attn (Attention): Attention module for spatial attention.
        ffn (nn.Sequential): Feed-forward network module.
    """

    def __init__(self, c1, c2, e=0.5):
        super().__init__()
        assert c1 == c2
        self.c = int(c1 * e)
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv(2 * self.c, c1, 1)

        self.attn = Attention(self.c, attn_ratio=0.5, num_heads=self.c // 64)
        self.ffn = nn.Sequential(Conv(self.c, self.c * 2, 1), Conv(self.c * 2, self.c, 1, act=False))

    def forward(self, x):
        """
        Forward pass of the PSA module.

        Args:
            x (torch.Tensor): Input tensor.

        Returns:
            (torch.Tensor): Output tensor.
        """
        a, b = self.cv1(x).split((self.c, self.c), dim=1)
        b = b + self.attn(b)
        b = b + self.ffn(b)
        return self.cv2(torch.cat((a, b), 1))


class SCDown(nn.Module):
    def __init__(self, c1, c2, k, s):
        """
        Spatial Channel Downsample (SCDown) module.

        Args:
            c1 (int): Number of input channels.
            c2 (int): Number of output channels.
            k (int): Kernel size for the convolutional layer.
            s (int): Stride for the convolutional layer.
        """
        super().__init__()
        self.cv1 = Conv(c1, c2, 1, 1)
        self.cv2 = Conv(c2, c2, k=k, s=s, g=c2, act=False)

    def forward(self, x):
        """
        Forward pass of the SCDown module.

        Args:
            x (torch.Tensor): Input tensor.

        Returns:
            (torch.Tensor): Output tensor after applying the SCDown module.
        """
        return self.cv2(self.cv1(x))
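After pasting the code in, a quick smoke test (my own, not part of the PR) confirms the three yaml-facing blocks build and run. Note that PSA asserts c1 == c2 and, with the default e=0.5, needs at least 128 input channels, otherwise the head count self.c // 64 becomes zero.

import torch
from ultralytics.nn.modules.block import C2fCIB, PSA, SCDown

x = torch.randn(1, 256, 32, 32)
print(C2fCIB(256, 256, n=1, lk=True)(x).shape)  # torch.Size([1, 256, 32, 32])
print(PSA(256, 256)(x).shape)                   # torch.Size([1, 256, 32, 32])
print(SCDown(256, 512, k=3, s=2)(x).shape)      # torch.Size([1, 512, 16, 16])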
Still in ultralytics/ultralytics/nn/modules/block.py, add the new class names to the __all__ tuple so they can be imported elsewhere:
"RepVGGDW",
"CIB",
"C2fCIB",
"Attention",
"PSA",
"SCDown",
In ultralytics/ultralytics/nn/tasks.py, extend the existing import from ultralytics.nn.modules with the new module names:
RepVGGDW,
CIB,
C2fCIB,
Attention,
PSA,
SCDown,
Still in tasks.py, add a branch in BaseModel.fuse() so the re-parameterizable block gets folded before inference/export:

if isinstance(m, RepVGGDW):
    m.fuse()
    m.forward = m.forward_fuse

Finally, register the modules in parse_model(): add PSA, SCDown, C2fCIB to the tuple of modules whose input/output channels are resolved from the yaml (alongside C2f, SPPF and friends), and additionally add C2fCIB to the tuple of modules that receive the repeat count n (alongside C2f). A small sanity check for the fuse step follows.
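This check (again my own, not from the PR) verifies that the RepVGGDW re-parameterization is lossless, and shows why tasks.py has to rebind forward to forward_fuse: after fuse() the 3x3 branch is deleted, so the original forward would crash.

import torch
from ultralytics.nn.modules.block import RepVGGDW

m = RepVGGDW(64).eval()   # eval mode so BatchNorm uses running stats, matching the fused conv
x = torch.randn(1, 64, 16, 16)
y_before = m(x)
m.fuse()                  # folds BN and merges the 7x7 and padded 3x3 kernels into one conv
y_after = m.forward_fuse(x)
print(torch.allclose(y_before, y_after, atol=1e-5))  # expected: True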
Now swap in the new yaml and start training. Be careful: the different v10 sizes are not obtained by simply scaling depth and width; the variants also differ in which layers use which blocks, so each size has its own yaml (a hedged sketch of the n-size config is given below).
yolov10n.yaml
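The yaml content itself was swallowed by the page template, so here is a minimal sketch of an n-scale config in the spirit of the official yolov10n.yaml: the stride-16/32 downsamples become SCDown, PSA sits after SPPF, and the last head stage uses C2fCIB. Because we skipped the NMS-free head, the standard v8 Detect is kept here instead of v10Detect; treat the exact layer indices and arguments as assumptions and double-check them against the PR linked above before training.

# Parameters (n scale only; other sizes change which blocks are used, not just depth/width)
nc: 80
scales:
  n: [0.33, 0.25, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]        # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]       # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]       # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, SCDown, [512, 3, 2]]     # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, SCDown, [1024, 3, 2]]    # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]         # 9
  - [-1, 1, PSA, [1024]]             # 10

head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]]        # cat backbone P4
  - [-1, 3, C2f, [512]]              # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]]        # cat backbone P3
  - [-1, 3, C2f, [256]]              # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]       # cat head P4
  - [-1, 3, C2f, [512]]              # 19 (P4/16-medium)

  - [-1, 1, SCDown, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]]       # cat head P5
  - [-1, 3, C2fCIB, [1024, True, True]]  # 22 (P5/32-large)

  - [[16, 19, 22], 1, Detect, [nc]]  # Detect(P3, P4, P5)

Once the file is in place, training is the usual YOLO("yolov10n.yaml") followed by model.train(...), exactly as with any other Ultralytics config.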