
Human Action Recognition with OpenCV


 

Version note: with opencv-python 3.3 the script fails because cv2.dnn has no readNet(); use a newer release.
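Since the note above says opencv-python 3.3 lacks cv2.dnn.readNet(), it can help to fail fast with a clear message instead of an AttributeError. A minimal sketch, assuming readNet() appears in the 3.4 series (the exact minimum patch version is an assumption; the helper names are hypothetical):

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string like '3.4.2' into a comparable int tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

def supports_readnet(cv2_version: str) -> bool:
    """Heuristic check: treat 3.4.0 as the first series with cv2.dnn.readNet().

    The blog note only confirms that 3.3 fails; the 3.4 cutoff is an assumption.
    """
    return version_tuple(cv2_version) >= (3, 4, 0)
```

In the script you could then call `supports_readnet(cv2.__version__)` before loading the model and print an upgrade hint if it returns False.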

 

 

The model recognizes more than 400 action classes:

Class list from the official OpenCV sample:
https://github.com/opencv/opencv/blob/master/samples/data/dnn/action_recongnition_kinetics.txt 

Sample code: https://github.com/opencv/opencv/blob/master/samples/dnn/action_recognition.py
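The class file linked above is plain text with one label per line, which is why the script below can load it with a simple strip/split. A small sketch of that parsing step (the example labels are illustrative, not guaranteed to match the file's exact first entries):

```python
def load_classes(text: str) -> list:
    """Split a newline-separated label file into a list of class names."""
    return text.strip().split("\n")

# Illustrative contents; the real file holds 400+ Kinetics labels.
labels = load_classes("abseiling\nair drumming\nanswering questions\n")
```

With the real file you would use `load_classes(open(path).read())`, exactly as the full script does inline.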

 

Project directory structure:

 

Full code:

 

# Run with:
# python activity_recognition_demo.py --model resnet-34_kinetics.onnx --classes action_recognition_kinetics.txt --input videos/activities.mp4
from collections import deque
import numpy as np
import argparse
import imutils
import cv2

# Build the argument parser
ap = argparse.ArgumentParser()
ap.add_argument(
    "-m",
    "--model",
    required=True,
    help="path to trained human activity recognition model")
ap.add_argument(
    "-c", "--classes", required=True, help="path to class labels file")
ap.add_argument(
    "-i", "--input", type=str, default="", help="optional path to video file")
args = vars(ap.parse_args())

# Class labels, clip duration (frames), and spatial sample size
CLASSES = open(args["classes"]).read().strip().split("\n")
SAMPLE_DURATION = 16
SAMPLE_SIZE = 112
print("Processing...")

# Queue holding the most recent SAMPLE_DURATION frames
frames = deque(maxlen=SAMPLE_DURATION)
# Load the model
net = cv2.dnn.readNet(args["model"])
# Open the input video, or the webcam if no file was given
vs = cv2.VideoCapture(args["input"] if args["input"] else 0)
writer = None

# Loop over the video stream
while True:
    # Read one frame
    (grabbed, frame) = vs.read()
    # Stop when the video ends
    if not grabbed:
        print("No more frames to read...")
        break
    # Resize the frame and push it onto the queue
    frame = imutils.resize(frame, width=640)
    frames.append(frame)
    # Wait until the queue holds a full clip
    if len(frames) < SAMPLE_DURATION:
        continue
    # Queue is full: build the input blob
    blob = cv2.dnn.blobFromImages(
        frames,
        1.0, (SAMPLE_SIZE, SAMPLE_SIZE), (114.7748, 107.7354, 99.4750),
        swapRB=True,
        crop=True)
    blob = np.transpose(blob, (1, 0, 2, 3))
    blob = np.expand_dims(blob, axis=0)
    # Run the prediction
    net.setInput(blob)
    outputs = net.forward()
    label = CLASSES[np.argmax(outputs)]
    # Draw the label box
    cv2.rectangle(frame, (0, 0), (300, 40), (255, 0, 0), -1)
    cv2.putText(frame, label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.8,
                (0, 0, 255), 2)
    # cv2.imshow("Activity Recognition", frame)
    # Save the annotated frames
    if writer is None:
        # Initialize the video writer
        # fourcc = cv2.VideoWriter_fourcc(*"MJPG")
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter(
            "videos\\test.mp4",
            fourcc, 30, (frame.shape[1], frame.shape[0]), True)
    writer.write(frame)
    # Press q to quit
    # key = cv2.waitKey(1) & 0xFF
    # if key == ord("q"):
    #     break

print("Done...")
# Guard against the case where no frame was ever written
if writer is not None:
    writer.release()
vs.release()
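The two numpy calls after blobFromImages are the step that adapts OpenCV's output to a 3D CNN: blobFromImages returns one blob per frame, shaped (frames, channels, height, width), while the Kinetics ResNet-34 expects a single 5-D clip shaped (batch, channels, frames, height, width). A sketch with dummy data showing the shape bookkeeping:

```python
import numpy as np

SAMPLE_DURATION, SAMPLE_SIZE = 16, 112

# Stand-in for cv2.dnn.blobFromImages on 16 frames:
# shape is (frames, channels, height, width)
blob = np.zeros((SAMPLE_DURATION, 3, SAMPLE_SIZE, SAMPLE_SIZE), dtype=np.float32)

# Swap the frame and channel axes: (channels, frames, height, width)
blob = np.transpose(blob, (1, 0, 2, 3))

# Add a leading batch axis: (batch, channels, frames, height, width)
blob = np.expand_dims(blob, axis=0)

print(blob.shape)  # (1, 3, 16, 112, 112)
```

Getting this order wrong is a common source of cryptic dnn errors, since the network's first layer expects exactly this 5-D layout.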

Some of the test results are not great, possibly because of the videos I used.
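One reason results can look poor is that the script above takes the argmax of each clip independently, so the label can flicker from clip to clip. A common mitigation, not used in the code above, is to average the per-class scores over a rolling window of recent predictions before taking the argmax. A hedged sketch (class and method names are hypothetical):

```python
from collections import deque

class RollingPrediction:
    """Average per-class scores over the last `maxlen` predictions."""

    def __init__(self, num_classes: int, maxlen: int = 8):
        self.history = deque(maxlen=maxlen)
        self.num_classes = num_classes

    def update(self, scores) -> int:
        """Record one score vector; return the argmax of the averaged scores."""
        self.history.append(list(scores))
        sums = [0.0] * self.num_classes
        for vec in self.history:
            for i, s in enumerate(vec):
                sums[i] += s
        avg = [s / len(self.history) for s in sums]
        return max(range(self.num_classes), key=lambda i: avg[i])
```

In the main loop this would replace the direct argmax, e.g. `label = CLASSES[smoother.update(outputs.flatten())]`, at the cost of slower reaction to genuine action changes.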

Test results:

 

Model download:

Link: https://pan.baidu.com/s/17mQvUr6jsUyd2k0RrbrXaA
Extraction code: irho
 

This article was contributed by a community member; when reposting, please credit the source: wpsshop blog.