As embedded devices have matured, it is increasingly common to deploy trained models directly on a board. This post records the steps for converting a modified YOLOv5 model and running it on an RKNN NPU board.
The main changes are to the detection layers and the anchor boxes: a higher-resolution P2/4 branch is added, so the model detects at four scales (P2, P3, P4, P5). The full model configuration:
```yaml
# parameters
nc: 2                # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [5,6, 8,14, 15,11]          # 4
  - [10,13, 16,30, 33,23]       # P3/8
  - [30,61, 62,45, 59,119]      # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],                # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],             # 1-P2/4
   [-1, 3, BottleneckCSP, [128]],          # 2   160*160
   [-1, 1, Conv, [256, 3, 2]],             # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],          # 4   80*80
   [-1, 1, Conv, [512, 3, 2]],             # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],          # 6   40*40
   [-1, 1, Conv, [1024, 3, 2]],            # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],       # 8
   [-1, 3, BottleneckCSP, [1024, False]],  # 9   20*20
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],                 # 10  20*20
   [-1, 1, nn.Upsample, [None, 2, 'nearest']], # 11  40*40
   [[-1, 6], 1, Concat, [1]],                  # 12  cat backbone P4  40*40
   [-1, 3, BottleneckCSP, [512, False]],       # 13  40*40
   [-1, 1, Conv, [512, 1, 1]],                 # 14  40*40
   [-1, 1, nn.Upsample, [None, 2, 'nearest']], # 15  80*80
   [[-1, 4], 1, Concat, [1]],                  # 16  cat backbone P3  80*80
   [-1, 3, BottleneckCSP, [512, False]],       # 17  (P3/8-small)  80*80
   [-1, 1, Conv, [256, 1, 1]],                 # 18  80*80
   [-1, 1, nn.Upsample, [None, 2, 'nearest']], # 19  160*160
   [[-1, 2], 1, Concat, [1]],                  # 20  cat backbone P2  160*160
   [-1, 3, BottleneckCSP, [256, False]],       # 21  160*160
   [-1, 1, Conv, [256, 3, 2]],                 # 22  80*80
   [[-1, 18], 1, Concat, [1]],                 # 23  80*80
   [-1, 3, BottleneckCSP, [256, False]],       # 24  80*80
   [-1, 1, Conv, [256, 3, 2]],                 # 25  40*40
   [[-1, 14], 1, Concat, [1]],                 # 26  cat head P4  40*40
   [-1, 3, BottleneckCSP, [512, False]],       # 27  (P4/16-medium)  40*40
   [-1, 1, Conv, [512, 3, 2]],                 # 28  20*20
   [[-1, 10], 1, Concat, [1]],                 # 29  cat head P5  20*20
   [-1, 3, BottleneckCSP, [1024, False]],      # 30  (P5/32-large)  20*20
   [[21, 24, 27, 30], 1, Detect, [nc, anchors]],  # Detect(P2, P3, P4, P5)
  ]
```
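As a quick sanity check (a minimal sketch, not part of the original post), the four output scales of the config above can be verified by dividing the 640×640 input size by each branch's stride:

```python
# Strides of the four detection branches (P2/4, P3/8, P4/16, P5/32)
# and the number of anchors per branch, taken from the YAML above.
strides = [4, 8, 16, 32]
anchors_per_scale = 3
img_size = 640

# Feature-map sizes match the 160/80/40/20 comments in the config.
grids = [img_size // s for s in strides]
print(grids)  # [160, 80, 40, 20]

# Total number of raw predictions the Detect layer emits per image.
total = sum(anchors_per_scale * g * g for g in grids)
print(total)  # 102000
```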
Training works best starting from the YOLOv5-L pretrained weights. If you train on an NVIDIA GPU, first check the card's compute capability and which CUDA and cuDNN versions your machine can run.
1. Convert the .pt model to an ONNX model with export.py from the yolov5 repository, setting the ONNX opset to 12:

python export.py --weights best.pt --img 640 --batch 1 --include onnx --opset 12
2. Convert the ONNX model to an RKNN model with the test.py script in the onnx example folder of rknn-toolkit2. This turns best.onnx into best.rknn and must be run on an x86 Linux host.
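The conversion script boils down to a few RKNN-Toolkit2 API calls. The following is a minimal sketch; the target_platform, mean/std normalization, and the dataset.txt quantization image list are assumptions you must adjust to your board and training pipeline:

```python
# Hypothetical ONNX -> RKNN conversion sketch using rknn-toolkit2
# (runs on an x86 Linux host, not on the board).
from rknn.api import RKNN

rknn = RKNN()
# Assumed values: adjust target_platform to your chip and the
# normalization to whatever preprocessing the model was trained with.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform='rk3588')
rknn.load_onnx(model='best.onnx')
# dataset.txt lists calibration images for int8 quantization.
rknn.build(do_quantization=True, dataset='dataset.txt')
rknn.export_rknn('best.rknn')
rknn.release()
```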
3. On the board, call the RKNN model through the rknpu2 runtime (e.g. its YOLOv5 demo) for NPU-accelerated inference. The anchor arrays in the demo's post-processing code must be modified to match the trained model; since the P2 branch adds a fourth detection scale, a fourth anchor array is needed:
const int anchor0[6] = {5, 6, 8, 14, 15, 11};         // P2/4 (added branch)
const int anchor1[6] = {10, 13, 16, 30, 33, 23};      // P3/8
const int anchor2[6] = {30, 61, 62, 45, 59, 119};     // P4/16
const int anchor3[6] = {116, 90, 156, 198, 373, 326}; // P5/32
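These anchors are consumed during post-processing, which turns the raw head outputs into boxes. The standard YOLOv5 decode can be sketched in Python (self-contained; the input values below are illustrative, not from the post):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode(tx, ty, tw, th, cx, cy, stride, anchor_w, anchor_h):
    """YOLOv5 box decode: xy offsets are sigmoid-squashed around the
    grid cell, and wh is scaled by the anchor for this branch."""
    bx = (2.0 * sigmoid(tx) - 0.5 + cx) * stride
    by = (2.0 * sigmoid(ty) - 0.5 + cy) * stride
    bw = (2.0 * sigmoid(tw)) ** 2 * anchor_w
    bh = (2.0 * sigmoid(th)) ** 2 * anchor_h
    return bx, by, bw, bh

# A zero raw prediction at grid cell (0, 0) on the P3/8 branch with
# anchor (10, 13) decodes to a box centred at (4, 4) of size 10x13.
box = decode(0, 0, 0, 0, cx=0, cy=0, stride=8, anchor_w=10, anchor_h=13)
print(box)  # (4.0, 4.0, 10.0, 13.0)
```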