
Calling YOLOv8 from Python to Predict on a Video and Parse the Results: Fixing an Error


1 Related References

同济子豪兄's keypoint detection tutorial video

同济子豪兄's reference code on GitHub

Blog of the reader who raised the question

2 Problem Description

This post calls a pretrained YOLOv8 model through the Python API to run prediction on a video and visualize the keypoint detection results. Before any changes to the code, the output looked like the figure below: whenever fewer than 16 keypoints were detected, the missing points were automatically mapped to the origin.

(Figure: visualization before the fix, with undetected keypoints drawn at the origin and linked into the skeleton.)
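For context, the prediction itself goes through the standard Ultralytics Python API. A minimal sketch of the setup (the model file is the one used later in this post; frame stands in for one BGR image read from the video):

from ultralytics import YOLO

model = YOLO('yolov8x-pose-p6.pt')     # pretrained pose model; downloads on first use
results = model(frame, verbose=False)  # run prediction on a single BGR frame
keypoints = results[0].keypoints       # keypoint predictions for every detected person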

Note that in 同济子豪兄's original code, `.data` has to be added to the following line for it to run; otherwise it raises an error:

results[0].keypoints.data.cpu().numpy().astype('uint32')
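The reason, as far as I can tell, is that in recent ultralytics releases `results[0].keypoints` is a `Keypoints` results object that wraps the raw tensor; without `.data`, the chained conversion does not end in a plain numpy array and the final `.astype('uint32')` call fails. A sketch of the accessors (assuming a recent ultralytics version; details vary between releases):

kpts = results[0].keypoints  # Keypoints object, not a raw tensor
data = kpts.data             # underlying tensor, shape (num_persons, 17, 3): x, y, confidence
xy = kpts.xy                 # xy coordinates only, shape (num_persons, 17, 2)
conf = kpts.conf             # confidences only, shape (num_persons, 17)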

3 Solution

After working through the code, I came up with a possible fix. First, the mapping to the origin is probably because the origin itself is being treated as a detected keypoint and drawn into the skeleton links; if we only take the keypoints that were actually detected when drawing the connecting lines, that should solve the problem. So we add a line of code to get the number of detected keypoints, and when later drawing each box's keypoints we only draw the detected ones:

# All keypoint coordinates and confidences for this box
bbox_keypoints = bboxes_keypoints[idx]
# Number of keypoints in the prediction
num_keypoints = bbox_keypoints.shape[0]
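As a quick sanity check on the shapes involved (assuming the COCO-trained pose models, which predict 17 keypoints per person):

print(bboxes_keypoints.shape)  # e.g. (2, 17, 3) when two people are detected
print(bbox_keypoints.shape)    # (17, 3): one (x, y, confidence) row per keypoint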
When drawing, the keypoint loop then only covers those points:

# Draw this box's keypoints
for kpt_id in range(num_keypoints):
    # Color, radius, and XY coordinates of this keypoint
    kpt_color = kpt_color_map[kpt_id]['color']
    kpt_radius = kpt_color_map[kpt_id]['radius']
    kpt_x = int(bbox_keypoints[kpt_id][0])
    kpt_y = int(bbox_keypoints[kpt_id][1])
    # Draw a circle: image, XY coordinates, radius, color, line width (-1 = filled)
    img_bgr = cv2.circle(img_bgr, (kpt_x, kpt_y), kpt_radius, kpt_color, -1)
    # Write the keypoint label: image, text, top-left corner, font, font size, color, thickness
    kpt_label = str(kpt_id)  # label with the keypoint ID (pick one)
    # kpt_label = str(kpt_color_map[kpt_id]['name'])  # label with the keypoint name (pick one)
    img_bgr = cv2.putText(img_bgr, kpt_label,
                          (kpt_x + kpt_labelstr['offset_x'], kpt_y + kpt_labelstr['offset_y']),
                          cv2.FONT_HERSHEY_SIMPLEX, kpt_labelstr['font_size'], kpt_color,
                          kpt_labelstr['font_thickness'])

return img_bgr

But the keypoint at the origin still showed up: undetected points are still present in the prediction, just with low confidence, so counting them does not filter them out. The next thought was to add a confidence condition and filter out the low-confidence keypoints. Here, only keypoints with confidence greater than 0.5 are linked:

bbox_keypoints[srt_kpt_id][2] > 0.5 and
bbox_keypoints[dst_kpt_id][2] > 0.5
# Draw this box's skeleton connections
for skeleton in skeleton_map:
    # ID of the start keypoint
    srt_kpt_id = skeleton['srt_kpt_id']
    # ID of the end keypoint
    dst_kpt_id = skeleton['dst_kpt_id']
    if (
        srt_kpt_id < num_keypoints and
        dst_kpt_id < num_keypoints and
        bbox_keypoints[srt_kpt_id][2] > 0.5 and
        bbox_keypoints[dst_kpt_id][2] > 0.5
    ):
        # Coordinates of the start keypoint
        srt_kpt_x = int(bbox_keypoints[srt_kpt_id][0])
        srt_kpt_y = int(bbox_keypoints[srt_kpt_id][1])
        # Coordinates of the end keypoint
        dst_kpt_x = int(bbox_keypoints[dst_kpt_id][0])
        dst_kpt_y = int(bbox_keypoints[dst_kpt_id][1])
        # Connection color
        skeleton_color = skeleton['color']
        # Connection line width
        skeleton_thickness = skeleton['thickness']
        # Draw the connection
        img_bgr = cv2.line(img_bgr, (srt_kpt_x, srt_kpt_y), (dst_kpt_x, dst_kpt_y), color=skeleton_color,
                           thickness=skeleton_thickness)

But running with this code, the origin keypoints were still being connected. The cause is the earlier line results[0].keypoints.data.cpu().numpy().astype('uint32'): when converting to an integer type, the confidences get converted along with the coordinates, and since every confidence is less than 1, they all become 0. With the confidences destroyed, no threshold setting behaves as intended, and the links to the origin keep getting drawn. So we modify the code here so that the confidence values are not cast to integers, and finally stitch the pieces back together into bboxes_keypoints:

# Keypoint xy coordinates (cast to integers for drawing)
bboxes_keypoints_position = results[0].keypoints.data[:, :, :2].cpu().numpy().astype('uint32')
# Keypoint confidences (kept as floats)
confidence = results[0].keypoints.data[:, :, 2].cpu().numpy()
# Reassemble into a single (num_persons, num_keypoints, 3) array
bboxes_keypoints = np.concatenate([bboxes_keypoints_position, confidence[:, :, None]], axis=2)
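The truncation bug is easy to verify in isolation (a minimal sketch):

import numpy as np

conf = np.array([0.93, 0.41, 0.07])
print(conf.astype('uint32'))  # [0 0 0]: every confidence in [0, 1) truncates to 0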

Note that after this change, when later drawing the keypoints and the connections, it is best to force-cast the coordinate values back to integers as you read them out; otherwise errors are likely:

# Coordinates of the start keypoint
srt_kpt_x = int(bbox_keypoints[srt_kpt_id][0])
srt_kpt_y = int(bbox_keypoints[srt_kpt_id][1])
# Coordinates of the end keypoint
dst_kpt_x = int(bbox_keypoints[dst_kpt_id][0])
dst_kpt_y = int(bbox_keypoints[dst_kpt_id][1])

kpt_x = int(bbox_keypoints[kpt_id][0])
kpt_y = int(bbox_keypoints[kpt_id][1])
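The casts are needed because np.concatenate promotes the mixed uint32/float inputs to a float array, and OpenCV's drawing functions will not accept float pixel coordinates. A minimal sketch of the dtype behavior:

import numpy as np

pos = np.zeros((1, 17, 2), dtype='uint32')    # integer xy coordinates
conf = np.zeros((1, 17, 1), dtype='float64')  # float confidences
merged = np.concatenate([pos, conf], axis=2)
print(merged.dtype)  # float64: the coordinates were upcast, hence the int() calls above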

With the above changes, the problem of keypoints being mapped to the origin and linked no longer appears. If it still shows up for you, try adjusting the confidence threshold mentioned earlier; mine is 0.5, and it can be tuned to your actual footage. (Figure: the corrected output from my run, with no stray links to the origin.)
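If you find yourself tuning it often, one option is to pull the threshold out into a named constant (a sketch; KPT_CONF_THRESH is a name introduced here, not part of the original code):

KPT_CONF_THRESH = 0.5  # raise this if stray links to the origin persist

if (
    srt_kpt_id < num_keypoints and
    dst_kpt_id < num_keypoints and
    bbox_keypoints[srt_kpt_id][2] > KPT_CONF_THRESH and
    bbox_keypoints[dst_kpt_id][2] > KPT_CONF_THRESH
):
    ...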

4 Complete Code

import cv2
import numpy as np
import time
from tqdm import tqdm
from ultralytics import YOLO
import matplotlib.pyplot as plt
import torch

# Use GPU if available, otherwise fall back to CPU
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('device:', device)

# Load a pretrained model
# model = YOLO('yolov8n-pose.pt')
# model = YOLO('yolov8s-pose.pt')
# model = YOLO('yolov8m-pose.pt')
# model = YOLO('yolov8l-pose.pt')
# model = YOLO('yolov8x-pose.pt')
model = YOLO('yolov8x-pose-p6.pt')

# Switch compute device
model.to(device)
# model.cpu()   # CPU
# model.cuda()  # GPU

# Bounding-box (rectangle) visualization settings
bbox_color = (150, 0, 0)  # BGR color of the box
bbox_thickness = 2        # line width of the box

# Box label text settings
bbox_labelstr = {
    'font_size': 1,       # font size
    'font_thickness': 2,  # font thickness
    'offset_x': 0,        # X offset of the text, positive = right
    'offset_y': -10,      # Y offset of the text, positive = down
}

# Keypoint BGR color scheme
kpt_color_map = {
    0: {'name': 'Nose', 'color': [0, 0, 255], 'radius': 6},
    1: {'name': 'Right Eye', 'color': [255, 0, 0], 'radius': 6},
    2: {'name': 'Left Eye', 'color': [255, 0, 0], 'radius': 6},
    3: {'name': 'Right Ear', 'color': [0, 255, 0], 'radius': 6},
    4: {'name': 'Left Ear', 'color': [0, 255, 0], 'radius': 6},
    5: {'name': 'Right Shoulder', 'color': [193, 182, 255], 'radius': 6},
    6: {'name': 'Left Shoulder', 'color': [193, 182, 255], 'radius': 6},
    7: {'name': 'Right Elbow', 'color': [16, 144, 247], 'radius': 6},
    8: {'name': 'Left Elbow', 'color': [16, 144, 247], 'radius': 6},
    9: {'name': 'Right Wrist', 'color': [1, 240, 255], 'radius': 6},
    10: {'name': 'Left Wrist', 'color': [1, 240, 255], 'radius': 6},
    11: {'name': 'Right Hip', 'color': [140, 47, 240], 'radius': 6},
    12: {'name': 'Left Hip', 'color': [140, 47, 240], 'radius': 6},
    13: {'name': 'Right Knee', 'color': [223, 155, 60], 'radius': 6},
    14: {'name': 'Left Knee', 'color': [223, 155, 60], 'radius': 6},
    15: {'name': 'Right Ankle', 'color': [139, 0, 0], 'radius': 6},
    16: {'name': 'Left Ankle', 'color': [139, 0, 0], 'radius': 6},
}

# Keypoint label text settings
kpt_labelstr = {
    'font_size': 0.5,     # font size
    'font_thickness': 1,  # font thickness
    'offset_x': 10,       # X offset of the text, positive = right
    'offset_y': 0,        # Y offset of the text, positive = down
}

# Skeleton connection BGR color scheme
skeleton_map = [
    {'srt_kpt_id': 15, 'dst_kpt_id': 13, 'color': [0, 100, 255], 'thickness': 2},   # right ankle - right knee
    {'srt_kpt_id': 13, 'dst_kpt_id': 11, 'color': [0, 255, 0], 'thickness': 2},     # right knee - right hip
    {'srt_kpt_id': 16, 'dst_kpt_id': 14, 'color': [255, 0, 0], 'thickness': 2},     # left ankle - left knee
    {'srt_kpt_id': 14, 'dst_kpt_id': 12, 'color': [0, 0, 255], 'thickness': 2},     # left knee - left hip
    {'srt_kpt_id': 11, 'dst_kpt_id': 12, 'color': [122, 160, 255], 'thickness': 2}, # right hip - left hip
    {'srt_kpt_id': 5, 'dst_kpt_id': 11, 'color': [139, 0, 139], 'thickness': 2},    # right shoulder - right hip
    {'srt_kpt_id': 6, 'dst_kpt_id': 12, 'color': [237, 149, 100], 'thickness': 2},  # left shoulder - left hip
    {'srt_kpt_id': 5, 'dst_kpt_id': 6, 'color': [152, 251, 152], 'thickness': 2},   # right shoulder - left shoulder
    {'srt_kpt_id': 5, 'dst_kpt_id': 7, 'color': [148, 0, 69], 'thickness': 2},      # right shoulder - right elbow
    {'srt_kpt_id': 6, 'dst_kpt_id': 8, 'color': [0, 75, 255], 'thickness': 2},      # left shoulder - left elbow
    {'srt_kpt_id': 7, 'dst_kpt_id': 9, 'color': [56, 230, 25], 'thickness': 2},     # right elbow - right wrist
    {'srt_kpt_id': 8, 'dst_kpt_id': 10, 'color': [0, 240, 240], 'thickness': 2},    # left elbow - left wrist
    {'srt_kpt_id': 1, 'dst_kpt_id': 2, 'color': [224, 255, 255], 'thickness': 2},   # right eye - left eye
    {'srt_kpt_id': 0, 'dst_kpt_id': 1, 'color': [47, 255, 173], 'thickness': 2},    # nose - right eye
    {'srt_kpt_id': 0, 'dst_kpt_id': 2, 'color': [203, 192, 255], 'thickness': 2},   # nose - left eye
    {'srt_kpt_id': 1, 'dst_kpt_id': 3, 'color': [196, 75, 255], 'thickness': 2},    # right eye - right ear
    {'srt_kpt_id': 2, 'dst_kpt_id': 4, 'color': [86, 0, 25], 'thickness': 2},       # left eye - left ear
    {'srt_kpt_id': 3, 'dst_kpt_id': 5, 'color': [255, 255, 0], 'thickness': 2},     # right ear - right shoulder
    {'srt_kpt_id': 4, 'dst_kpt_id': 6, 'color': [255, 18, 200], 'thickness': 2},    # left ear - left shoulder
]

def process_frame(img_bgr):
    '''
    Take one BGR frame (numpy array), return the visualized BGR frame
    '''
    results = model(img_bgr, verbose=False)  # verbose=False: do not print per-frame predictions

    # Number of predicted boxes
    num_bbox = len(results[0].boxes.cls)
    # xyxy coordinates of the predicted boxes
    bboxes_xyxy = results[0].boxes.xyxy.cpu().numpy().astype('uint32')
    # Keypoint xy coordinates (cast to integers) and confidences (kept as floats)
    bboxes_keypoints_position = results[0].keypoints.data[:, :, :2].cpu().numpy().astype('uint32')
    confidence = results[0].keypoints.data[:, :, 2].cpu().numpy()
    bboxes_keypoints = np.concatenate([bboxes_keypoints_position, confidence[:, :, None]], axis=2)

    for idx in range(num_bbox):  # iterate over each box
        # Coordinates of this box
        bbox_xyxy = bboxes_xyxy[idx]
        # Predicted class of the box (keypoint detection has a single class)
        bbox_label = results[0].names[0]
        # Draw the box
        img_bgr = cv2.rectangle(img_bgr, (bbox_xyxy[0], bbox_xyxy[1]), (bbox_xyxy[2], bbox_xyxy[3]), bbox_color,
                                bbox_thickness)
        # Write the box label: image, text, top-left corner, font, font size, color, thickness
        img_bgr = cv2.putText(img_bgr, bbox_label,
                              (bbox_xyxy[0] + bbox_labelstr['offset_x'], bbox_xyxy[1] + bbox_labelstr['offset_y']),
                              cv2.FONT_HERSHEY_SIMPLEX, bbox_labelstr['font_size'], bbox_color,
                              bbox_labelstr['font_thickness'])

        bbox_keypoints = bboxes_keypoints[idx]  # all keypoint coordinates and confidences of this box
        # Number of keypoints in the prediction
        num_keypoints = bbox_keypoints.shape[0]

        # Draw this box's skeleton connections
        for skeleton in skeleton_map:
            # ID of the start keypoint
            srt_kpt_id = skeleton['srt_kpt_id']
            # ID of the end keypoint
            dst_kpt_id = skeleton['dst_kpt_id']
            if (
                srt_kpt_id < num_keypoints and
                dst_kpt_id < num_keypoints and
                bbox_keypoints[srt_kpt_id][2] > 0.5 and
                bbox_keypoints[dst_kpt_id][2] > 0.5
            ):
                # Coordinates of the start keypoint
                srt_kpt_x = int(bbox_keypoints[srt_kpt_id][0])
                srt_kpt_y = int(bbox_keypoints[srt_kpt_id][1])
                # Coordinates of the end keypoint
                dst_kpt_x = int(bbox_keypoints[dst_kpt_id][0])
                dst_kpt_y = int(bbox_keypoints[dst_kpt_id][1])
                # Connection color
                skeleton_color = skeleton['color']
                # Connection line width
                skeleton_thickness = skeleton['thickness']
                # Draw the connection
                img_bgr = cv2.line(img_bgr, (srt_kpt_x, srt_kpt_y), (dst_kpt_x, dst_kpt_y), color=skeleton_color,
                                   thickness=skeleton_thickness)

        # Draw this box's keypoints
        for kpt_id in range(num_keypoints):
            # Color, radius, and XY coordinates of this keypoint
            kpt_color = kpt_color_map[kpt_id]['color']
            kpt_radius = kpt_color_map[kpt_id]['radius']
            kpt_x = int(bbox_keypoints[kpt_id][0])
            kpt_y = int(bbox_keypoints[kpt_id][1])
            # Draw a circle: image, XY coordinates, radius, color, line width (-1 = filled)
            img_bgr = cv2.circle(img_bgr, (kpt_x, kpt_y), kpt_radius, kpt_color, -1)
            # Write the keypoint label: image, text, top-left corner, font, font size, color, thickness
            kpt_label = str(kpt_id)  # label with the keypoint ID (pick one)
            # kpt_label = str(kpt_color_map[kpt_id]['name'])  # label with the keypoint name (pick one)
            img_bgr = cv2.putText(img_bgr, kpt_label,
                                  (kpt_x + kpt_labelstr['offset_x'], kpt_y + kpt_labelstr['offset_y']),
                                  cv2.FONT_HERSHEY_SIMPLEX, kpt_labelstr['font_size'], kpt_color,
                                  kpt_labelstr['font_thickness'])

    return img_bgr

# Template for processing a video frame by frame
# No code changes needed beyond defining process_frame
# 同济子豪兄 2021-7-10
def generate_video(input_path='videos/robot.mp4'):
    filehead = input_path.split('/')[-1]
    output_path = "out-" + filehead
    print('Processing video:', input_path)

    # Count the total number of frames
    cap = cv2.VideoCapture(input_path)
    frame_count = 0
    while cap.isOpened():
        success, frame = cap.read()
        frame_count += 1
        if not success:
            break
    cap.release()
    print('Total frames:', frame_count)

    # cv2.namedWindow('Crack Detection and Measurement Video Processing')
    cap = cv2.VideoCapture(input_path)
    frame_size = (cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    # fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))
    # fourcc = cv2.VideoWriter_fourcc(*'XVID')
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    fps = cap.get(cv2.CAP_PROP_FPS)
    out = cv2.VideoWriter(output_path, fourcc, fps, (int(frame_size[0]), int(frame_size[1])))

    # Bind the progress bar to the total frame count
    with tqdm(total=frame_count - 1) as pbar:
        try:
            while cap.isOpened():
                success, frame = cap.read()
                if not success:
                    break
                # Process the frame
                # frame_path = './temp_frame.png'
                # cv2.imwrite(frame_path, frame)
                try:
                    frame = process_frame(frame)
                except:
                    print('error')
                    pass
                if success:
                    # cv2.imshow('Video Processing', frame)
                    out.write(frame)
                    # Advance the progress bar by one frame
                    pbar.update(1)
                    # if cv2.waitKey(1) & 0xFF == ord('q'):
                    #     break
        except:
            print('Interrupted midway')
            pass
    cv2.destroyAllWindows()
    out.release()
    cap.release()
    print('Video saved:', output_path)

generate_video(input_path='videos/two.mp4')
# generate_video(input_path='videos/lym.mp4')
# generate_video(input_path='videos/cxk.mp4')
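To run the script on your own clip, point the final generate_video(...) call at your video file; the processed result is written to the working directory as out-<filename>.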

5 Acknowledgements

Thanks to 同济子豪兄 for the tutorial videos and the GitHub code walkthrough. (子豪兄's CSDN homepage)

Thanks to the reader who raised the question. (又又土's homepage)

This is my first blog post; if anything here is wrong, criticism and corrections are welcome, and I'm happy to discuss and learn together.

Thank you, everyone!
