
RK3568 Notes 28: Deploying CRNN License Plate Recognition

This is an original article; please credit the original source when reposting.

To run license plate recognition on the RK3568, the two obvious candidates are LPRNet and CRNN. This post documents plate recognition with CRNN; the workflow is identical if you swap in an LPRNet model.

I. Platforms

1. Training platform: Autodl

2. Target board: ATK-DLRK3568

II. Environment Setup

Training and testing were done on a GPU instance on the Autodl platform.

1. Create a virtual environment

conda create -n cnn_plate_env python=3.8

2. Activate it

conda activate cnn_plate_env

3. Install the dependencies

pip install easydict -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install pyyaml -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install opencv-python -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install tensorboardX -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install imgaug -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install torch -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install torchvision -i https://pypi.tuna.tsinghua.edu.cn/simple

4. Download the source code

# Source repository
https://github.com/we0091234/crnn_plate_recognition

5. Prepare the dataset

The license plate dataset combines CCPD and CRPD:

1. Plate crops taken from CCPD and CRPD, plus some plates I collected myself.

2. Label the data and generate train.txt and val.txt.

Contact the author if you need the dataset.

Then run the following commands to produce train.txt and val.txt:

python plateLabel.py --image_path your/train/img/path/ --label_file datasets/train.txt
python plateLabel.py --image_path your/val/img/path/ --label_file datasets/val.txt

The generated label files pair each image with its encoded plate characters, one sample per line.
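For illustration only, a hypothetical excerpt of datasets/train.txt, assuming plateLabel.py writes an image path followed by the index of each plate character in the model's alphabet (the actual paths and index values depend on your data):

/your/train/img/path/plate_0001.jpg 1 42 35 36 37 51 52
/your/train/img/path/plate_0002.jpg 3 44 38 33 40 47 55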

6. Edit the configuration file

Write the train.txt and val.txt paths into lib/config/360CC_config.yaml:

JSON_FILE: {'train': '/root/crnn_plate_recognition/datasets/train.txt', 'val': '/root/crnn_plate_recognition/datasets/val.txt'}

Make sure the paths match your environment.
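For orientation, the relevant block of lib/config/360CC_config.yaml should look roughly like the sketch below; JSON_FILE is the field quoted above, while the surrounding key names are an assumption and may differ slightly in your copy of the repo:

DATASET:
  JSON_FILE: {'train': '/root/crnn_plate_recognition/datasets/train.txt', 'val': '/root/crnn_plate_recognition/datasets/val.txt'}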

7. Patch train.py

If the installed PyYAML version is too new, the config file fails to load:

Error: TypeError: load() missing 1 required positional argument: 'Loader'

Fix: pass an explicit Loader to yaml.load(), as in the sketch below.
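A minimal sketch of the fix: PyYAML 5.1+ deprecated calling yaml.load() without a Loader, and PyYAML 6.0 made it mandatory, so pass one explicitly wherever train.py loads the config (the file path and variable names below are illustrative):

import yaml

with open('lib/config/360CC_config.yaml') as f:
    # Old call that fails on new PyYAML: config = yaml.load(f)
    config = yaml.load(f, Loader=yaml.FullLoader)
    # yaml.safe_load(f) also works if the config uses only plain YAML types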

8. Train

python train.py --cfg lib/config/360CC_config.yaml

Training runs for 100 epochs by default, takes an estimated 2-3 hours, and saves the results in the output folder.

9. Test

python demo.py --model_path saved_model/best.pth --image_path images/test.jpg

III. Exporting to ONNX and Inference Testing

Exporting requires onnx and onnxsim:

pip install onnx
pip install onnxsim

1. Export the ONNX model

python export.py --weights saved_model/best.pth --save_path saved_model/best.onnx --simplify
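It can be worth sanity-checking the exported graph before going further; a minimal sketch using the onnx package (nothing here is specific to this repo):

import onnx

model = onnx.load("saved_model/best.onnx")
onnx.checker.check_model(model)  # raises if the graph is malformed
# Inspect input/output tensor names to confirm what export.py produced
print([i.name for i in model.graph.input])
print([o.name for o in model.graph.output])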

2. Inference test

python onnx_infer.py --onnx_file saved_model/best.onnx --image_path images/test.jpg
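To see what onnx_infer.py does under the hood, here is a minimal sketch of CRNN inference with onnxruntime, assuming a 168x48 input and the same normalization that the RKNN config below bakes in (mean 150.54, std 49.215); the output axis order is a guess and may need adjusting to match the actual export:

import cv2
import numpy as np
import onnxruntime as ort

plate_chr = "#京沪津渝冀晋蒙辽吉黑苏浙皖闽赣鲁豫鄂湘粤桂琼川贵云藏陕甘青宁新学警港澳挂使领民航危0123456789ABCDEFGHJKLMNPQRSTUVWXYZ险品"

session = ort.InferenceSession("saved_model/best.onnx")
input_name = session.get_inputs()[0].name

img = cv2.imread("images/test.jpg")
img = cv2.resize(img, (168, 48)).astype(np.float32)
img = (img - 150.54) / 49.215              # normalize like the RKNN config
img = img.transpose(2, 0, 1)[np.newaxis]   # HWC -> NCHW

preds = session.run(None, {input_name: img})[0]
seq = np.argmax(preds, axis=-1).flatten()  # best class per time step

# Greedy CTC decode: drop blanks (index 0) and collapse repeats
plate, prev = "", 0
for idx in seq:
    if idx != 0 and idx != prev:
        plate += plate_chr[int(idx)]
    prev = idx
print(plate)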

IV. Converting to RKNN

Conversion and deployment use rknn-toolkit2-1.5.0.

Copy the ONNX model exported after training into the same directory, then run the conversion script below to produce the RKNN model.

recongniton_cov_rknn.py:

import numpy as np
import cv2
from rknn.api import RKNN
from PIL import Image, ImageDraw, ImageFont

if __name__ == '__main__':
    # Create RKNN object
    rknn2 = RKNN(verbose=False)

    # Pre-process config
    print('--> Config model')
    rknn2.config(mean_values=[[150.54, 150.54, 150.54]],
                 std_values=[[49.215, 49.215, 49.215]],
                 target_platform="rk3568")
    print('done')

    # Load ONNX model
    print('--> Loading model')
    ret = rknn2.load_onnx(model="./best.onnx")
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build model
    print('--> Building model')
    ret = rknn2.build(do_quantization=False)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export RKNN model
    print('--> Export rknn model')
    ret = rknn2.export_rknn("./best.rknn")
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    # Init runtime environment
    print('--> Init runtime environment')
    ret = rknn2.init_runtime()
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')

    rknn2.release()
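Running python recongniton_cov_rknn.py inside the rknn-toolkit2 environment should step through the config, load, build and export stages and leave best.rknn next to the script. Note do_quantization=False: the CRNN is left unquantized (it runs in fp16 on the NPU), since quantizing a sequence recognizer typically costs more accuracy than quantizing a detector.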

V. Deployment Test

Copy the converted RKNN models into the virtual machine.

Script for testing the RKNN models:

import numpy as np
import cv2
from rknn.api import RKNN
from PIL import Image, ImageDraw, ImageFont

# CRNN alphabet: '#' is the CTC blank, then province abbreviations, letters and digits
plate_chr = "#京沪津渝冀晋蒙辽吉黑苏浙皖闽赣鲁豫鄂湘粤桂琼川贵云藏陕甘青宁新学警港澳挂使领民航危0123456789ABCDEFGHJKLMNPQRSTUVWXYZ险品"

OBJ_THRESH = 0.25
NMS_THRESH = 0.45
IMG_SIZE = 640
CLASSES = ("plate", )


def decodePlate(preds):  # recognition post-processing (greedy CTC decoding)
    pre = 0
    newPreds = []
    for i in range(len(preds[0])):
        if (preds[0][i] != 0).all() and (preds[0][i] != pre).all():
            newPreds.append(preds[0][i])
        pre = preds[0][i]
    plate = ""
    for i in newPreds:
        plate += plate_chr[int(i)]
    print(plate)
    return plate


def cv2ImgAddText(img, text, left, top, textColor=(0, 255, 0), textSize=20):
    if isinstance(img, np.ndarray):  # convert from an OpenCV image if necessary
        img = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    # Create a drawing object for the given image
    draw = ImageDraw.Draw(img)
    # Font setup (SimSun renders the Chinese province characters)
    fontStyle = ImageFont.truetype("simsun.ttc", textSize, encoding="utf-8")
    # Draw the text
    draw.text((left, top), text, textColor, font=fontStyle)
    # Convert back to OpenCV format
    return cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR)


def sigmoid(x):
    # return 1 / (1 + np.exp(-x))
    return x  # identity: the exported model already applies sigmoid


def xywh2xyxy(x):
    # Convert [x, y, w, h] to [x1, y1, x2, y2]
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
    return y


def process(input, mask, anchors):
    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])

    box_confidence = sigmoid(input[..., 4])
    box_confidence = np.expand_dims(box_confidence, axis=-1)

    box_class_probs = sigmoid(input[..., 5:])

    box_xy = sigmoid(input[..., :2]) * 2 - 0.5

    col = np.tile(np.arange(0, grid_w), grid_w).reshape(-1, grid_w)
    row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_h)
    col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    grid = np.concatenate((col, row), axis=-1)
    box_xy += grid
    box_xy *= int(IMG_SIZE / grid_h)

    box_wh = pow(sigmoid(input[..., 2:4]) * 2, 2)
    box_wh = box_wh * anchors

    box = np.concatenate((box_xy, box_wh), axis=-1)
    return box, box_confidence, box_class_probs


def filter_boxes(boxes, box_confidences, box_class_probs):
    """Filter boxes with box threshold. It's a bit different with origin yolov5 post process!

    # Arguments
        boxes: ndarray, boxes of objects.
        box_confidences: ndarray, confidences of objects.
        box_class_probs: ndarray, class_probs of objects.

    # Returns
        boxes: ndarray, filtered boxes.
        classes: ndarray, classes for boxes.
        scores: ndarray, scores for boxes.
    """
    boxes = boxes.reshape(-1, 4)
    box_confidences = box_confidences.reshape(-1)
    box_class_probs = box_class_probs.reshape(-1, box_class_probs.shape[-1])

    _box_pos = np.where(box_confidences >= OBJ_THRESH)
    boxes = boxes[_box_pos]
    box_confidences = box_confidences[_box_pos]
    box_class_probs = box_class_probs[_box_pos]

    class_max_score = np.max(box_class_probs, axis=-1)
    classes = np.argmax(box_class_probs, axis=-1)
    _class_pos = np.where(class_max_score >= OBJ_THRESH)

    boxes = boxes[_class_pos]
    classes = classes[_class_pos]
    scores = (class_max_score * box_confidences)[_class_pos]

    return boxes, classes, scores


def nms_boxes(boxes, scores):
    """Suppress non-maximal boxes.

    # Arguments
        boxes: ndarray, boxes of objects.
        scores: ndarray, scores of objects.

    # Returns
        keep: ndarray, index of effective boxes.
    """
    x = boxes[:, 0]
    y = boxes[:, 1]
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]

    areas = w * h
    order = scores.argsort()[::-1]

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)

        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])

        w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
        h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
        inter = w1 * h1

        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= NMS_THRESH)[0]
        order = order[inds + 1]
    keep = np.array(keep)
    return keep


def yolov5_post_process(input_data):
    masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
               [59, 119], [116, 90], [156, 198], [373, 326]]

    boxes, classes, scores = [], [], []
    for input, mask in zip(input_data, masks):
        b, c, s = process(input, mask, anchors)
        b, c, s = filter_boxes(b, c, s)
        boxes.append(b)
        classes.append(c)
        scores.append(s)

    boxes = np.concatenate(boxes)
    boxes = xywh2xyxy(boxes)
    classes = np.concatenate(classes)
    scores = np.concatenate(scores)

    nboxes, nclasses, nscores = [], [], []
    for c in set(classes):
        inds = np.where(classes == c)
        b = boxes[inds]
        c = classes[inds]
        s = scores[inds]

        keep = nms_boxes(b, s)

        nboxes.append(b[keep])
        nclasses.append(c[keep])
        nscores.append(s[keep])

    if not nclasses and not nscores:
        return None, None, None

    boxes = np.concatenate(nboxes)
    classes = np.concatenate(nclasses)
    scores = np.concatenate(nscores)

    return boxes, classes, scores


def draw(image, boxes, scores, classes):
    """Draw the boxes on the image.

    # Argument:
        image: original image.
        boxes: ndarray, boxes of objects.
        classes: ndarray, classes of objects.
        scores: ndarray, scores of objects.
        all_classes: all classes name.
    """
    for box, score, cl in zip(boxes, scores, classes):
        top, left, right, bottom = box
        # print('class: {}, score: {}'.format(CLASSES[cl], score))
        # print('box coordinate left,top,right,down: [{}, {}, {}, {}]'.format(top, left, right, bottom))
        top = int(top)
        left = int(left)
        right = int(right)
        bottom = int(bottom)

        cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2)
        cv2.putText(image, '{0} {1:.2f}'.format(CLASSES[cl], score),
                    (top, left - 6),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 0, 255), 2)


def letterbox(im, new_shape=(640, 640), color=(0, 0, 0)):
    # Resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return im, ratio, (dw, dh)


if __name__ == '__main__':
    # Create RKNN objects: rknn1 detects plates (YOLOv5), rknn2 recognizes them (CRNN)
    rknn1 = RKNN(verbose=False)
    rknn2 = RKNN(verbose=False)

    # Pre-process config
    print('--> Config model')
    rknn1.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform="rk3568")
    rknn2.config(mean_values=[[150.54, 150.54, 150.54]], std_values=[[49.215, 49.215, 49.215]], target_platform="rk3568")
    print('done')

    # Load ONNX models
    print('--> Loading model')
    ret = rknn1.load_onnx(model="./yolov5n.onnx")
    ret = rknn2.load_onnx(model="./best.onnx")
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build models (the detector is quantized, the recognizer is not)
    print('--> Building model')
    ret = rknn1.build(do_quantization=True, dataset="dataset.txt")
    ret = rknn2.build(do_quantization=False)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export RKNN models
    print('--> Export rknn model')
    ret = rknn1.export_rknn("./yolov5n.rknn")
    ret = rknn2.export_rknn("./best.rknn")
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    # Init runtime environment (the simulator, since no target is given)
    print('--> Init runtime environment')
    ret = rknn1.init_runtime()
    ret = rknn2.init_runtime()
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')

    # Set inputs
    img = cv2.imread("./car.jpg")
    # img, ratio, (dw, dh) = letterbox(img, new_shape=(IMG_SIZE, IMG_SIZE))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))

    # Inference
    print('--> Running model')
    outputs = rknn1.inference(inputs=[img])
    print('done')

    # Post process
    input0_data = outputs[0]
    input1_data = outputs[1]
    input2_data = outputs[2]

    input0_data = input0_data.reshape([3, -1] + list(input0_data.shape[-2:]))
    input1_data = input1_data.reshape([3, -1] + list(input1_data.shape[-2:]))
    input2_data = input2_data.reshape([3, -1] + list(input2_data.shape[-2:]))

    input_data = list()
    input_data.append(np.transpose(input0_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input1_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input2_data, (2, 3, 0, 1)))

    boxes, classes, scores = yolov5_post_process(input_data)

    img_1 = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    if boxes is not None:
        print(boxes)
        # draw(img_1, boxes, scores, classes)
        for box in boxes:
            x1, y1, x2, y2 = box
            x1 = int(x1)
            x2 = int(x2)
            y1 = int(y1)
            y2 = int(y2)
            cv2.rectangle(img_1, (x1, y1), (x2, y2), (0, 255, 0), 1, 1)
            roi = img[y1:y2, x1:x2]
            try:
                roi = cv2.resize(roi, (168, 48))  # CRNN input size
                output = rknn2.inference(inputs=[roi])
                input_data = np.swapaxes(output[0], 1, 2)
                index = np.argmax(input_data, axis=1)
                plate_no = decodePlate(index)
                img_1 = cv2ImgAddText(img_1, str(plate_no), x1, y1 - 30, (0, 255, 0), 30)
                print(str(plate_no))
            except:
                continue

    # Show output
    cv2.imshow("post process result", img_1)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

    rknn1.release()
    rknn2.release()

The program's overall flow:

1. Read the image

2. Load the models

3. Detect the plate

4. Recognize the plate

5. Display the result
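Concretely, it is a two-stage pipeline: rknn1 (the quantized YOLOv5 detector) finds plate boxes on the 640x640 input, each box is cropped and resized to 168x48, and rknn2 (the unquantized CRNN) reads the characters, which decodePlate turns into a string that cv2ImgAddText draws back onto the frame.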

The above was run in the simulator on the virtual machine, and the results were correct.
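For reference, the recognizer can also be driven from Python on the board itself with rknn-toolkit-lite2 instead of the PC simulator. A minimal sketch, assuming the lite runtime is installed on the RK3568 (the mean/std are already baked into best.rknn at build time, so the raw resized crop is passed in; plate_roi.jpg is a hypothetical pre-cropped plate image):

import cv2
import numpy as np
from rknnlite.api import RKNNLite

rknn = RKNNLite()
if rknn.load_rknn("./best.rknn") != 0:
    raise SystemExit("load failed")
if rknn.init_runtime() != 0:       # runs on the board's NPU
    raise SystemExit("init failed")

roi = cv2.imread("./plate_roi.jpg")   # hypothetical pre-cropped plate image
roi = cv2.resize(roi, (168, 48))
output = rknn.inference(inputs=[roi])
index = np.argmax(np.swapaxes(output[0], 1, 2), axis=1)
# decodePlate(index) from the script above yields the plate string
rknn.release()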

VI. On-Board C++ Deployment

Deploy this yourself; following 讯为电子's (Xunwei's) sample program, deployment works.

If there is any infringement, or you need the complete code, please contact the author promptly.
