Converting YOLOv5 to an RKNN Model

1. Introduction

There are plenty of tutorials on converting a model and running it on the NPU, but for anyone unfamiliar with model conversion there are a few key points that deserve real attention, so this tutorial starts from the very basics!

2. Environment Setup

(1) Set up the RKNN toolkit environment (version 1.7.1 is recommended; if the environment is not configured yet, take care of that first — tutorials are readily available).

(2) Download the YOLOv5 code from the official YOLOv5 repository. Download the pretrained models as shown in the figure; this tutorial uses the 5s/5m/5x models.

For the Python environment, torch==1.8 and torchvision==0.9.1 are recommended; other packages as needed.

3. RKNN Model Conversion

3.1 Copy yolov5s.pt, yolov5m.pt, and yolov5x.pt into the yolov5 code directory

(1) Run: python3 export.py --weights yolov5s.pt --img 640 --batch 1 --opset 12
This generates the yolov5s.onnx model. For 5m or 5x, just swap in the corresponding weight file name.

(2) The output of the run is shown below:

The ONNX export is now done — that's 50% of the work!

Note: the yolov5 project needs PyTorch 1.8.0 or 1.9.0 to export correctly.

3.2 Converting the ONNX Model to RKNN

On to the code. The parameters below should be self-explanatory; if not, check the RKNN documentation (having made it this far, you will manage).

ONNX_MODEL = 'yolov5s.onnx'
RKNN_MODEL = 'yolov5s.rknn'
IMG_PATH = './bus.jpg'
DATASET = './dataset.txt'
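The DATASET file is the calibration list that rknn.build() reads during quantization: a plain text file with one image path per line. A minimal sketch for generating it from a folder of images (make_dataset_file is a hypothetical helper written for this tutorial, not part of the RKNN toolkit):

```python
import glob
import os


def make_dataset_file(image_dir, out_path='dataset.txt'):
    """Write one calibration-image path per line, as rknn.build(dataset=...) expects."""
    paths = sorted(glob.glob(os.path.join(image_dir, '*.jpg')))
    with open(out_path, 'w') as f:
        for p in paths:
            f.write(p + '\n')
    return paths
```

A few dozen representative images are usually enough for post-training quantization calibration.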

Now for the key part: when switching to 5m or 5x, a few output nodes need to change.

(1) Output nodes for exporting yolov5s to RKNN

        ret = rknn.load_onnx(model=ONNX_MODEL,outputs=['396', '440', '484'])

(2) Output nodes for exporting yolov5m to RKNN

        ret = rknn.load_onnx(model=ONNX_MODEL,outputs=['462', '506', '550'])

(3) Output nodes for exporting yolov5x to RKNN

        ret = rknn.load_onnx(model=ONNX_MODEL,outputs=['696', '740', '784'])

Note: 1. The shapes of these three output nodes are [1, 3, 80, 80, 85], [1, 3, 40, 40, 85], and [1, 3, 20, 20, 85].

           2. The exact network structure can be inspected with netron (a screenshot is included below, since this is a step-by-step tutorial).

          Example: take yolov5s — there are three Transpose operators in the graph. The output name of the first one is 396, and the other two Transpose outputs are named 440 and 484. That is exactly how the node names above were found.
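These shapes matter again at inference time: rknn.inference() returns each head as a flat buffer, and the post-processing code below reshapes it back and moves the anchor axis next to the channel axis. A quick NumPy check of that bookkeeping for the 80×80 head:

```python
import numpy as np

# One head's raw output: [1, 3, 80, 80, 85] = batch, anchors, grid_h, grid_w, 5 + 80 classes
flat = np.zeros(1 * 3 * 80 * 80 * 85, dtype=np.float32)

feat = flat.reshape([3, 80, 80, 85])       # drop the batch dim of 1
feat = np.transpose(feat, (1, 2, 0, 3))    # -> (80, 80, 3, 85), grid-major for decoding
```

The 40×40 and 20×20 heads are handled identically, just with their own grid sizes.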

         The ONNX-to-RKNN conversion code (yolov5s) follows; for 5m and 5x, only ONNX_MODEL, RKNN_MODEL, and the exported output nodes need to change.

        One more pitfall: make sure onnx==1.6.0 is installed when converting!

import os

from rknn.api import RKNN

ONNX_MODEL = 'yolov5s.onnx'
RKNN_MODEL = 'yolov5s.rknn'
IMG_PATH = './bus.jpg'
DATASET = './dataset.txt'

if __name__ == '__main__':
    # Create RKNN object
    rknn = RKNN()

    if not os.path.exists(ONNX_MODEL):
        print('model not exist')
        exit(-1)

    # pre-process config
    print('--> Config model')
    rknn.config(reorder_channel='0 1 2',
                mean_values=[[0, 0, 0]],
                std_values=[[255, 255, 255]],
                optimization_level=3,
                target_platform='rk1808',
                output_optimize=1)
    print('done')

    # Load ONNX model (output node names are for yolov5s; see above for 5m/5x)
    print('--> Loading model')
    ret = rknn.load_onnx(model=ONNX_MODEL, outputs=['396', '440', '484'])
    if ret != 0:
        print('Load yolov5 failed!')
        exit(ret)
    print('done')

    # Build model with INT8 quantization against the calibration dataset
    print('--> Building model')
    ret = rknn.build(do_quantization=True, dataset=DATASET)
    if ret != 0:
        print('Build yolov5 failed!')
        exit(ret)
    print('done')

    # Export RKNN model
    print('--> Export RKNN model')
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print('Export yolov5 rknn failed!')
        exit(ret)
    print('done')

    rknn.release()

        The converted model is shown below (a small but real sense of achievement).

4. RKNN Model Inference

        With the image, the model, and the inference code in place, you are ready to go — the RKNN tooling is quite turnkey; the main hurdle is getting the environment configured correctly.

import os
import time

import cv2
import numpy as np
from rknn.api import RKNN

CLASSES = ("person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train",
           "truck", "boat", "traffic light", "fire hydrant", "stop sign",
           "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep",
           "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella",
           "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard",
           "sports ball", "kite", "baseball bat", "baseball glove", "skateboard",
           "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork",
           "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange",
           "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair",
           "sofa", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor",
           "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave",
           "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
           "scissors", "teddy bear", "hair drier", "toothbrush")


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def xywh2xyxy(x):
    # Convert [x, y, w, h] to [x1, y1, x2, y2]
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
    return y


def process(input, mask, anchors):
    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])

    box_confidence = sigmoid(input[..., 4])
    box_confidence = np.expand_dims(box_confidence, axis=-1)

    box_class_probs = sigmoid(input[..., 5:])

    # Decode xy: sigmoid, scale/shift, add grid cell offsets, multiply by stride
    box_xy = sigmoid(input[..., :2]) * 2 - 0.5
    col = np.tile(np.arange(0, grid_w), grid_w).reshape(-1, grid_w)
    row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_h)
    col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    grid = np.concatenate((col, row), axis=-1)
    box_xy += grid
    box_xy *= int(IMG_SIZE / grid_h)

    # Decode wh: (2 * sigmoid)^2 scaled by the anchor
    box_wh = pow(sigmoid(input[..., 2:4]) * 2, 2)
    box_wh = box_wh * anchors

    box = np.concatenate((box_xy, box_wh), axis=-1)
    return box, box_confidence, box_class_probs


def filter_boxes(boxes, box_confidences, box_class_probs):
    """Filter boxes with the box threshold. Slightly different from the original yolov5 post-process!
    # Arguments
        boxes: ndarray, boxes of objects.
        box_confidences: ndarray, confidences of objects.
        box_class_probs: ndarray, class_probs of objects.
    # Returns
        boxes: ndarray, filtered boxes.
        classes: ndarray, classes for boxes.
        scores: ndarray, scores for boxes.
    """
    box_classes = np.argmax(box_class_probs, axis=-1)
    box_class_scores = np.max(box_class_probs, axis=-1)
    pos = np.where(box_confidences[..., 0] >= BOX_THRESH)

    boxes = boxes[pos]
    classes = box_classes[pos]
    scores = box_class_scores[pos]
    return boxes, classes, scores


def nms_boxes(boxes, scores):
    """Suppress non-maximal boxes.
    # Arguments
        boxes: ndarray, boxes of objects.
        scores: ndarray, scores of objects.
    # Returns
        keep: ndarray, indices of the boxes to keep.
    """
    x = boxes[:, 0]
    y = boxes[:, 1]
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]

    areas = w * h
    order = scores.argsort()[::-1]

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)

        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])

        w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
        h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
        inter = w1 * h1

        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= NMS_THRESH)[0]
        order = order[inds + 1]
    keep = np.array(keep)
    return keep


def yolov5_post_process(input_data):
    masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
               [59, 119], [116, 90], [156, 198], [373, 326]]

    boxes, classes, scores = [], [], []
    for input, mask in zip(input_data, masks):
        b, c, s = process(input, mask, anchors)
        b, c, s = filter_boxes(b, c, s)
        boxes.append(b)
        classes.append(c)
        scores.append(s)

    boxes = np.concatenate(boxes)
    boxes = xywh2xyxy(boxes)
    classes = np.concatenate(classes)
    scores = np.concatenate(scores)

    # Class-wise NMS
    nboxes, nclasses, nscores = [], [], []
    for c in set(classes):
        inds = np.where(classes == c)
        b = boxes[inds]
        c = classes[inds]
        s = scores[inds]

        keep = nms_boxes(b, s)

        nboxes.append(b[keep])
        nclasses.append(c[keep])
        nscores.append(s[keep])

    if not nclasses and not nscores:
        return None, None, None

    boxes = np.concatenate(nboxes)
    classes = np.concatenate(nclasses)
    scores = np.concatenate(nscores)
    return boxes, classes, scores


def draw(image, boxes, scores, classes):
    """Draw the boxes on the image.
    # Arguments
        image: original image.
        boxes: ndarray, boxes of objects.
        scores: ndarray, scores of objects.
        classes: ndarray, classes of objects.
    """
    for box, score, cl in zip(boxes, scores, classes):
        x1, y1, x2, y2 = [int(v) for v in box]
        # print('class: {}, score: {}'.format(CLASSES[cl], score))
        # print('box coordinate: [{}, {}, {}, {}]'.format(x1, y1, x2, y2))
        cv2.rectangle(image, (x1, y1), (x2, y2), (255, 0, 0), 2)
        cv2.putText(image, '{0} {1:.2f}'.format(CLASSES[cl], score),
                    (x1, y1 - 6),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 0, 255), 2)


def letterbox(im, new_shape=(640, 640), color=(0, 0, 0)):
    # Resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right,
                            cv2.BORDER_CONSTANT, value=color)  # add border
    return im, ratio, (dw, dh)


if __name__ == '__main__':
    RKNN_MODEL = 'yolov5s.rknn'
    IMG_PATH = './bus.jpg'
    BOX_THRESH = 0.5
    NMS_THRESH = 0.45
    IMG_SIZE = 640

    # Create RKNN object and load the converted model
    rknn = RKNN()
    ret = rknn.load_rknn(RKNN_MODEL)

    # Init runtime environment
    print('--> Init runtime environment')
    ret = rknn.init_runtime(target='rk1808')
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')

    # Set inputs: letterbox to 640x640 and convert BGR -> RGB
    img = cv2.imread(IMG_PATH)
    img, ratio, (dw, dh) = letterbox(img, new_shape=(IMG_SIZE, IMG_SIZE))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    # Inference
    print('--> Running model')
    t1 = time.time()
    outputs = rknn.inference(inputs=[img])
    t2 = time.time()
    print('DET_inf_time:', t2 - t1)

    # Post process: reshape each flat head output back to [3, grid, grid, 85]
    input0_data = outputs[0].reshape([3, 80, 80, 85])
    input1_data = outputs[1].reshape([3, 40, 40, 85])
    input2_data = outputs[2].reshape([3, 20, 20, 85])

    input_data = [np.transpose(input0_data, (1, 2, 0, 3)),
                  np.transpose(input1_data, (1, 2, 0, 3)),
                  np.transpose(input2_data, (1, 2, 0, 3))]

    boxes, classes, scores = yolov5_post_process(input_data)
    t3 = time.time()
    print('post_process_time:', t3 - t2)

    img_1 = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    if boxes is not None:
        draw(img_1, boxes, scores, classes)
    cv2.imwrite('result.jpg', img_1)

    rknn.release()
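The box decode inside process() can be checked in isolation: YOLOv5 passes the raw xywh logits through a sigmoid, shifts xy by the grid cell and multiplies by the stride, and squares the scaled wh term against the anchor. A small NumPy check of that math (the cell indices, stride, and anchor are illustrative values for the first 80×80 head):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

stride = 8                          # 640 / 80 for the first detection head
anchor = np.array([10.0, 13.0])     # first anchor of the 80x80 head
t = np.zeros(4)                     # raw xywh logits, all zero for the check
cx, cy = 5, 7                       # grid cell column and row

xy = (sigmoid(t[:2]) * 2 - 0.5 + np.array([cx, cy])) * stride
wh = (sigmoid(t[2:]) * 2) ** 2 * anchor
# sigmoid(0) = 0.5, so xy = (0.5 + [5, 7]) * 8 = [44, 60] and wh equals the anchor
```

With zero logits the predicted center lands in the middle of the cell and the box size equals the anchor, which is exactly the identity point of the decode.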

The inference results are shown below — they turn out quite well.
