
[YOLOv8 OpenCV C++ Tutorial Series] A step-by-step guide to YOLOv8 with OpenCV in C++: training YOLOv8 on your own dataset for traffic light recognition and traffic light fault detection


Contents

I. Introduction to YOLOv8
  1. YOLOv8 source code
  2. Official documentation
  3. Pretrained models (Baidu Netdisk)
II. Model training
  1. Annotating traffic light data
  2. Training environment
  3. Data conversion
  4. Building the training set
  5. Training
III. Validating the model
  1. Image test
  2. Video test
IV. Exporting to ONNX
V. YOLOv8 inference in C++ with OpenCV
  1. Development environment
  2. main function code
  3. YOLOv8 header file inference.h
  4. YOLOv8 source file inference.cpp


I. Introduction to YOLOv8

1. YOLOv8 source code

Repository: https://github.com/ultralytics/ultralytics

2. Official documentation

CLI - Ultralytics YOLOv8 Docs

3. Pretrained models (Baidu Netdisk)

These weights are needed for training; downloading from the official site is slow, so a mirror is provided:

If the models cannot be downloaded, contact QQ: 187100248.

Link: https://pan.baidu.com/s/1YfMxRPGk8LF75a4cbgYxGg  Extraction code: rd7b

II. Model training

1. Annotating traffic light data

There are 23 classes:

Traffic light classes:
red_light, green_light, yellow_light, off_light, part_ry_light, part_rg_light,
part_yg_light, ryg_light, countdown_off_light, countdown_on_light, shade_light, zero,
one, two, three, four, five, six,
seven, eight, nine, brokeNumber, brokenLight

Annotation tool: the integrated Labelme / LabelImage labeling tool (see the CSDN blog post "AI标注工具Labelme和LabelImage集成工具").

(Figure: image and label format after annotation)

2. Training environment

1) Ubuntu 18.04
2) CUDA 11.7 + cuDNN 8.0.6
3) OpenCV 4.5.5
4) PyTorch 1.8.1 (GPU)
5) Python 3.9

3. Data conversion

1) The annotation files of the labeled dataset need to be converted between the Pascal VOC .xml format and the YOLO .txt format. The conversion script used here (txt_2_xml, which rebuilds .xml annotations from YOLO-format .txt labels) is:

# Convert YOLO-format .txt labels into Pascal VOC .xml annotation files.
import os
import shutil
import xml.etree.ElementTree as ET
from xml.etree.ElementTree import Element, SubElement
from PIL import Image
import cv2

classes = ['red_light', 'green_light', 'yellow_light', 'off_light', 'part_ry_light', 'part_rg_light', 'part_yg_light', 'ryg_light',
           'countdown_off_light', 'countdown_on_light', 'shade_light', 'zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven',
           'eight', 'nine', 'brokeNumber', 'brokenLight']


class Xml_make(object):
    def __init__(self):
        super().__init__()

    def __indent(self, elem, level=0):
        # Pretty-print the XML tree by inserting tab indentation.
        i = "\n" + level * "\t"
        if len(elem):
            if not elem.text or not elem.text.strip():
                elem.text = i + "\t"
            if not elem.tail or not elem.tail.strip():
                elem.tail = i
            for elem in elem:
                self.__indent(elem, level + 1)
            if not elem.tail or not elem.tail.strip():
                elem.tail = i
        else:
            if level and (not elem.tail or not elem.tail.strip()):
                elem.tail = i

    def _imageinfo(self, list_top):
        annotation_root = ET.Element('annotation')
        annotation_root.set('verified', 'no')
        tree = ET.ElementTree(annotation_root)
        '''
        list_top layout:
        0:xml_savepath 1:folder, 2:filename, 3:path
        4:checked, 5:width, 6:height, 7:depth
        '''
        folder_element = ET.Element('folder')
        folder_element.text = list_top[1]
        annotation_root.append(folder_element)
        filename_element = ET.Element('filename')
        filename_element.text = list_top[2]
        annotation_root.append(filename_element)
        path_element = ET.Element('path')
        path_element.text = list_top[3]
        annotation_root.append(path_element)
        # checked_element = ET.Element('checked')
        # checked_element.text = list_top[4]
        # annotation_root.append(checked_element)
        source_element = ET.Element('source')
        database_element = SubElement(source_element, 'database')
        database_element.text = 'Unknown'
        annotation_root.append(source_element)
        size_element = ET.Element('size')
        width_element = SubElement(size_element, 'width')
        width_element.text = str(list_top[5])
        height_element = SubElement(size_element, 'height')
        height_element.text = str(list_top[6])
        depth_element = SubElement(size_element, 'depth')
        depth_element.text = str(list_top[7])
        annotation_root.append(size_element)
        segmented_person_element = ET.Element('segmented')
        segmented_person_element.text = '0'
        annotation_root.append(segmented_person_element)
        return tree, annotation_root

    def _bndbox(self, annotation_root, list_bndbox):
        # list_bndbox holds 9 entries per object:
        # name, flag, pose, truncated, difficult, xmin, ymin, xmax, ymax
        for i in range(0, len(list_bndbox), 9):
            object_element = ET.Element('object')
            name_element = SubElement(object_element, 'name')
            name_element.text = list_bndbox[i]
            # flag_element = SubElement(object_element, 'flag')
            # flag_element.text = list_bndbox[i + 1]
            pose_element = SubElement(object_element, 'pose')
            pose_element.text = list_bndbox[i + 2]
            truncated_element = SubElement(object_element, 'truncated')
            truncated_element.text = list_bndbox[i + 3]
            difficult_element = SubElement(object_element, 'difficult')
            difficult_element.text = list_bndbox[i + 4]
            bndbox_element = SubElement(object_element, 'bndbox')
            xmin_element = SubElement(bndbox_element, 'xmin')
            xmin_element.text = str(list_bndbox[i + 5])
            ymin_element = SubElement(bndbox_element, 'ymin')
            ymin_element.text = str(list_bndbox[i + 6])
            xmax_element = SubElement(bndbox_element, 'xmax')
            xmax_element.text = str(list_bndbox[i + 7])
            ymax_element = SubElement(bndbox_element, 'ymax')
            ymax_element.text = str(list_bndbox[i + 8])
            annotation_root.append(object_element)
        return annotation_root

    def txt_to_xml(self, list_top, list_bndbox):
        tree, annotation_root = self._imageinfo(list_top)
        annotation_root = self._bndbox(annotation_root, list_bndbox)
        self.__indent(annotation_root)
        tree.write(list_top[0], encoding='utf-8', xml_declaration=True)


def txt_2_xml(source_path, xml_save_dir, jpg_save_dir, txt_dir):
    COUNT = 0
    for folder_path_tuple, folder_name_list, file_name_list in os.walk(source_path):
        for file_name in file_name_list:
            file_suffix = os.path.splitext(file_name)[-1]
            if file_suffix != '.jpg':
                continue
            list_top = []
            list_bndbox = []
            path = os.path.join(folder_path_tuple, file_name)
            xml_save_path = os.path.join(xml_save_dir, file_name.replace(file_suffix, '.xml'))
            txt_path = os.path.join(txt_dir, file_name.replace(file_suffix, '.txt'))
            filename = file_name  # os.path.splitext(file_name)[0]
            checked = 'NO'
            # print(file_name)
            im = Image.open(path)
            im_w = im.size[0]
            im_h = im.size[1]
            shutil.copy(path, jpg_save_dir)
            if im_w * im_h > 34434015:
                print(file_name)
            if im_w < 100:
                print(file_name)
            width = str(im_w)
            height = str(im_h)
            depth = '3'
            flag = 'rectangle'
            pose = 'Unspecified'
            truncated = '0'
            difficult = '0'
            list_top.extend([xml_save_path, folder_path_tuple, filename, path, checked, width, height, depth])
            for line in open(txt_path, 'r'):
                line = line.strip()
                info = line.split(' ')
                name = classes[int(info[0])]
                # YOLO labels are normalized: convert back to pixel coordinates.
                x_cen = float(info[1]) * im_w
                y_cen = float(info[2]) * im_h
                w = float(info[3]) * im_w
                h = float(info[4]) * im_h
                xmin = int(x_cen - w / 2) - 1
                ymin = int(y_cen - h / 2) - 1
                xmax = int(x_cen + w / 2) + 3
                ymax = int(y_cen + h / 2) + 3
                if xmin < 0:
                    xmin = 0
                if ymin < 0:
                    ymin = 0
                if xmax > im_w - 1:
                    xmax = im_w - 1
                if ymax > im_h - 1:
                    ymax = im_h - 1
                if w > 5 and h > 5:
                    list_bndbox.extend([name, flag, pose, truncated, difficult,
                                        str(xmin), str(ymin), str(xmax), str(ymax)])
                if xmin < 0 or xmax > im_w - 1 or ymin < 0 or ymax > im_h - 1:
                    print(xml_save_path)
            Xml_make().txt_to_xml(list_top, list_bndbox)
            COUNT += 1
            # print(COUNT, xml_save_path)


if __name__ == "__main__":
    out_xml_path = "/home/TL_TrainData/"            # output directory for the generated .xml files
    out_jpg_path = "/home/TL_TrainData/"            # output directory for the copied .jpg files
    txt_path = "/home/Data/TrafficLight/trainData"  # folder containing the YOLO-format .txt labels
    images_path = "/home/TrafficLight/trainData"    # folder containing the source images
    txt_2_xml(images_path, out_xml_path, out_jpg_path, txt_path)
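
For reference, each line of a YOLO-format .txt label read by the script above describes one object as a class index followed by the box center and size, all normalized to [0, 1] relative to the image width and height. The values below are made-up illustration values, not from the actual dataset (class 0 corresponds to red_light in the classes list above):

0 0.512300 0.204167 0.031250 0.087500
9 0.498438 0.262500 0.021875 0.045833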

4. Building the training set

2) To build the training set, the data must be split into an images directory and a labels directory: the image files go under images and the .txt label files go under labels, as sketched below.

(Figures: example contents of the images and labels directories)
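
A minimal layout might look like the following (the dataset root and file names are placeholders for illustration; each label file shares its base name with the corresponding image):

trainData/               # hypothetical dataset root
├── images/
│   ├── 000001.jpg
│   └── 000002.jpg
└── labels/
    ├── 000001.txt       # YOLO label for 000001.jpg
    └── 000002.txt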

5. Training

1) First install the training package:

pip install ultralytics

2) Modify the dataset configuration file coco128_light.yaml (a custom file adapted by the author):

# Ultralytics YOLO
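
The original yaml listing is cut off at this point. As a rough sketch only, assuming the standard Ultralytics dataset yaml layout and placeholder paths (not the author's actual paths), a coco128_light.yaml for these 23 classes would follow this shape:

# Ultralytics YOLO dataset config (sketch; path values are placeholders)
path: /home/TrafficLight/dataset   # dataset root directory (placeholder)
train: images/train                # training images, relative to path
val: images/val                    # validation images, relative to path

# class names; indices must match the .txt label files
names:
  0: red_light
  1: green_light
  2: yellow_light
  3: off_light
  4: part_ry_light
  5: part_rg_light
  6: part_yg_light
  7: ryg_light
  8: countdown_off_light
  9: countdown_on_light
  10: shade_light
  11: zero
  12: one
  13: two
  14: three
  15: four
  16: five
  17: six
  18: seven
  19: eight
  20: nine
  21: brokeNumber
  22: brokenLight

Training is then typically launched with the Ultralytics CLI, for example `yolo detect train data=coco128_light.yaml model=yolov8n.pt epochs=100 imgsz=640`; the model size and hyperparameters here are illustrative, not the author's settings.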