
Deploying YOLOv8 on the Horizon Sunrise X3 / J5

Contents

1. Export the ONNX model
2. Convert to a .bin model for the dev board
(1) Set up the Docker development environment provided by Horizon
(2) Start the Docker container
(3) Check the ONNX model
(4) Preprocess the images
(5) Convert to a .bin model
3. Validate the model

1. Export the ONNX model

from ultralytics import YOLO

# Load a model
model = YOLO(r'G:\Yolov8\yolov8-detect-pt\yolov8n.pt')  # load an official model
# model = YOLO('path/to/best.pt')  # load a custom trained model

# Export the model
model.export(format='onnx')
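If the checker in step 2 later complains about the opset or individual operators, it can help to pin the opset at export time. Ultralytics' export() accepts an opset argument; whether 11 is the right value depends on your toolchain release, so treat it as an assumption to verify against the hb_mapper checker output:

# hedged variant: pin the ONNX opset explicitly (11 is an assumption to verify)
model.export(format='onnx', opset=11)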

Running this produces the ONNX model.

Verify that the exported ONNX model actually works. If it runs correctly, the saved images (written to Ultralytics' default runs/detect/predict output directory) will contain the detected objects.

from ultralytics import YOLO
import glob
import os

# Load a model
model = YOLO(r'G:\Yolov8\yolov8-detect-pt\yolov8n.onnx')  # load an official model

# Predict with the model
imgpath = r'G:\Yolov8\ultralytics-main-detect\imgs'
imgs = glob.glob(os.path.join(imgpath, '*.jpg'))
for img in imgs:
    model.predict(img, save=True)

2. Convert to a .bin model for the dev board

(1) Set up the Docker development environment provided by Horizon

This article uses a Docker environment set up on Windows; configure yours on whatever system you prefer.

(2) Start the Docker container

If you have never used Docker on this machine before, install Docker first.
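The exact command depends on the image you loaded. A typical invocation mounts the extracted OpenExplorer package into the container, along the lines of the sketch below (the image name/tag and the host path are assumptions; substitute the image you actually pulled and the directory where you unpacked the release):

docker run -it --rm \
  -v "D:\horizon_xj3_open_explorer_v1.8.5_20211224:/open_explorer" \
  openexplorer/ai_toolchain_centos_7:v1.8.5 /bin/bash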

(3) Check the ONNX model

Go to horizon_xj3_open_explorer_v1.8.5_20211224\ddk\samples\ai_toolchain\horizon_model_convert_sample\04_detection, create an 08_yolov8 folder (name it as you like) and, inside it, a mapper folder (purely for personal file organization; you can skip this). Under mapper, create an onnx_model folder and put the ONNX model there.

Place the yolov8n.onnx model obtained in step 1 in that location.
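Once the files from the following steps are in place, the layout ends up roughly like this (folder names follow the choices above; adjust to your own):

04_detection/
└── 08_yolov8/
    └── mapper/
        ├── onnx_model/
        │   └── yolov8n.onnx
        ├── data/pcd/          # calibration source images (step 4)
        ├── 01_check.sh
        ├── 02_preprocess.sh
        ├── preprocess.py
        ├── 03_build.sh
        └── yolov8_config.yaml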

Then, in the mapper folder, create a file named 01_check.sh with the following contents:

#!/usr/bin/env sh
set -e -v
cd $(dirname $0) || exit

# model type; this article uses onnx
model_type="onnx"

# path of the onnx model to check
onnx_model="./onnx_model/yolov8n.onnx"

# where to write the check log
# (it still ends up next to 01_check.sh anyway, which feels like a small bug)
output="./model_output/yolov8_checker.log"

# target BPU architecture; no need to change
march="bernoulli2"

hb_mapper checker --model-type ${model_type} \
                  --model ${onnx_model} \
                  --output ${output} --march ${march}

cd into the mapper folder and run:

sh 01_check.sh

(4) Preprocess the images

Pick a set of images (20 to 100) and put them under the mapper folder (the script below reads them from data/pcd).

In the mapper folder, create a file named 02_preprocess.sh with the following contents:

#!/usr/bin/env bash
# Copyright (c) 2020 Horizon Robotics. All Rights Reserved.
#
# The material in this file is confidential and contains trade secrets
# of Horizon Robotics Inc. This is proprietary information owned by
# Horizon Robotics Inc. No part of this work may be disclosed,
# reproduced, copied, transmitted, or used in any way for any purpose,
# without the express written permission of Horizon Robotics Inc.
set -e -v
cd $(dirname $0) || exit

python3 ../../../data_preprocess.py \
  --src_dir data/pcd \
  --dst_dir ./pcd_rgb_f32 \
  --pic_ext .rgb \
  --read_mode opencv

data_preprocess.py ships with the official samples; it sits three levels up under horizon_model_convert_sample, which is why the script references ../../../data_preprocess.py.

Also in the mapper folder, create a file named preprocess.py with the following contents:

target_size=(640, 640) -> adjust the output size here.

# Copyright (c) 2021 Horizon Robotics. All Rights Reserved.
#
# The material in this file is confidential and contains trade secrets
# of Horizon Robotics Inc. This is proprietary information owned by
# Horizon Robotics Inc. No part of this work may be disclosed,
# reproduced, copied, transmitted, or used in any way for any purpose,
# without the express written permission of Horizon Robotics Inc.
import sys
sys.path.append("../../../01_common/python/data/")
from transformer import *
from dataloader import *


def calibration_transformers():
    transformers = [
        PadResizeTransformer(target_size=(640, 640)),
        HWC2CHWTransformer(),
        BGR2RGBTransformer(data_format="CHW"),
    ]
    return transformers


def infer_transformers(input_shape, input_layout="NHWC"):
    transformers = [
        PadResizeTransformer(target_size=input_shape),
        BGR2RGBTransformer(data_format="HWC"),
        RGB2NV12Transformer(data_format="HWC"),
        NV12ToYUV444Transformer(target_size=input_shape,
                                yuv444_output_layout=input_layout[1:]),
    ]
    return transformers


def infer_image_preprocess(image_file, input_layout, input_shape):
    transformers = infer_transformers(input_shape, input_layout)
    origin_image, processed_image = SingleImageDataLoaderWithOrigin(
        transformers, image_file, imread_mode="opencv")
    return origin_image, processed_image


def eval_image_preprocess(image_path, annotation_path, input_shape,
                          input_layout):
    transformers = infer_transformers(input_shape, input_layout)
    data_loader = COCODataLoader(transformers,
                                 image_path,
                                 annotation_path,
                                 imread_mode='opencv')
    return data_loader

Run:

sh 02_preprocess.sh

(5) Convert to a .bin model

With the preprocessed image data ready, the next step is to produce a model that can actually run on the dev board. Models deployed on Horizon boards use the .bin extension, hence the name ".bin model" here.

This step needs two files, 03_build.sh and yolov8_config.yaml, both placed in the mapper folder.

Note: before running 03_build.sh, make sure the size of the .rgb files generated in the previous step matches the input size configured in yolov8_config.yaml (the model input).
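A quick way to verify this is to check the file sizes. The sketch below assumes 02_preprocess.sh wrote one packed float32 tensor of shape 3x640x640 per image, matching input_shape '1x3x640x640' (an assumption; adjust if your sizes differ):

import glob
import os

# 3 channels * 640 * 640 pixels * 4 bytes per float32
expected = 3 * 640 * 640 * 4
for path in glob.glob('./pcd_rgb_f32/*.rgb'):
    size = os.path.getsize(path)
    print(path, 'OK' if size == expected else f'unexpected size: {size} bytes')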

03_build.sh

#!/bin/bash
set -e -v
cd $(dirname $0)

config_file="./yolov8_config.yaml"
model_type="onnx"

# build model
hb_mapper makertbin --config ${config_file} \
                    --model-type ${model_type}

yolov8_config.yaml

# Copyright (c) 2020 Horizon Robotics. All Rights Reserved.
#
# The material in this file is confidential and contains trade secrets
# of Horizon Robotics Inc. This is proprietary information owned by
# Horizon Robotics Inc. No part of this work may be disclosed,
# reproduced, copied, transmitted, or used in any way for any purpose,
# without the express written permission of Horizon Robotics Inc.

# model conversion related parameters
model_parameters:
  # the model file of floating-point ONNX neural network data
  onnx_model: 'onnx_model/yolov8n.onnx'
  # the applicable BPU architecture
  march: "bernoulli2"
  # whether to dump the intermediate results of all layers during conversion;
  # if set to True, the intermediate outputs of every layer are dumped
  layer_out_dump: False
  # output control parameter of the log file:
  # 'debug' dumps the details of model conversion,
  # 'info' dumps only the important information,
  # 'warn' dumps information ranked 'warn' and 'error' or higher
  log_level: 'debug'
  # the directory in which the model conversion results are stored
  working_dir: 'model_output'
  # name prefix of the generated model files used for dev board execution
  output_model_file_prefix: 'yolov8n_rgb'

# model input related parameters;
# use ';' to separate multiple input nodes, use None for default settings
input_parameters:
  # (Optional) node name of the model input; it must match the name in the
  # model file, otherwise an error is reported; the node name from the model
  # file is used when left blank
  input_name: ""
  # the data format fed to the network at runtime,
  # available options: nv12/rgb/bgr/yuv444/gray/featuremap
  input_type_rt: 'nv12'
  # the data layout fed to the network at runtime, NHWC/NCHW;
  # not needed when input_type_rt is nv12
  #input_layout_rt: ''
  # the data format used during network training,
  # available options: rgb/bgr/gray/featuremap/yuv444
  input_type_train: 'rgb'
  # the data layout used during network training, NHWC/NCHW
  input_layout_train: 'NCHW'
  # (Optional) the network input size, separated by 'x'; the input size from
  # the model file is used if left blank, otherwise this value overwrites it
  input_shape: '1x3x640x640'
  # the batch_size fed to the network at runtime, default: 1
  #input_batch: 1
  # preprocessing method of the network input, available options:
  # 'no_preprocess': no preprocessing,
  # 'data_mean': subtract the channel mean (mean_value),
  # 'data_scale': multiply image pixels by the data_scale ratio,
  # 'data_mean_and_scale': subtract the channel mean, then multiply by scale
  norm_type: 'data_scale'
  # the mean value subtracted from the image; values must be separated by
  # spaces if per-channel means are used
  mean_value: ''
  # the scale value of image preprocessing; values must be separated by
  # spaces if per-channel scales are used
  scale_value: 0.003921568627451

# model calibration parameters
calibration_parameters:
  # the directory where reference images for model quantization are stored;
  # image formats include JPEG, BMP etc.; the images should come from typical
  # application scenarios, usually 20~100 picked from the test set; avoid
  # atypical ones such as overexposed, oversaturated, blurry, pure black or
  # pure white images; use ';' to separate multiple input nodes
  cal_data_dir: './pcd_rgb_f32'
  # if the input image size differs from the training size and preprocess_on
  # is True, the default preprocessing method (skimage resize) is used to
  # resize or crop the input images to the specified size; otherwise the user
  # must resize the images to the training size in advance
  preprocess_on: False
  # model quantization algorithm; kl and max are supported and kl usually
  # meets the need; choose 'load' for quantized models exported from QAT
  calibration_type: 'default'

# compiler related parameters
compiler_parameters:
  # compilation strategy, two optimization modes: 'bandwidth' and 'latency';
  # 'bandwidth' optimizes DDR access bandwidth,
  # 'latency' optimizes inference time
  compile_mode: 'latency'
  # setting debug to True enables the compiler's debug mode, which dumps
  # performance simulation information such as frame rate and DDR bandwidth usage
  debug: False
  # number of cores used to compile the model; single core by default;
  # uncomment the line below to compile a dual-core model
  # core_num: 2
  # optimization level, O0~O3:
  # O0 performs no optimization and compiles fastest with the lowest
  # optimization; from O1 to O3 the compiled model is expected to run faster
  # at the cost of longer compilation time; O2 is recommended for the
  # fastest verification
  optimize_level: 'O3'

Run:

sh 03_build.sh

After a successful conversion, several files appear in model_output: the yolov8n_rgb.bin model for the board plus, typically, the intermediate ONNX models produced along the way.

3. Validate the model

Once you have the model, you can write your own post-processing code and validate it on the Horizon board.
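As a starting point, below is a minimal loading-and-inference sketch using the hobot_dnn Python bindings that ship with the X3 system image. The API names follow the stock pyeasy_dnn examples but should be checked against your board's image, and the YOLOv8 post-processing (decoding the detection head and running NMS) is deliberately left out:

# A minimal sketch, not a full detector: load the .bin model, feed one NV12
# frame, and print the output tensor shapes to design post-processing against.
import cv2
import numpy as np
from hobot_dnn import pyeasy_dnn as dnn  # stock bindings on the X3 image (assumption)

def bgr2nv12(image):
    # Convert a BGR frame to the NV12 buffer layout expected by input_type_rt: 'nv12'
    height, width = image.shape[:2]
    yuv420p = cv2.cvtColor(image, cv2.COLOR_BGR2YUV_I420).reshape(height * width * 3 // 2)
    y = yuv420p[:height * width]
    uv_planar = yuv420p[height * width:].reshape(2, height * width // 4)
    uv_packed = uv_planar.transpose(1, 0).reshape(height * width // 2)
    return np.concatenate((y, uv_packed))

models = dnn.load('./yolov8n_rgb.bin')
img = cv2.resize(cv2.imread('test.jpg'), (640, 640))  # plain resize; the toolchain flow used pad-resize
outputs = models[0].forward(bgr2nv12(img))
for out in outputs:
    print(out.buffer.shape)  # inspect shapes before writing YOLOv8 decode + NMS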
