
Single-Person Pose Detection | TensorFlow SinglePose

Single-person pose detection on an image

 

Disclaimer: the image used in this post is not mine; if it infringes your rights, contact me and I will remove it.

Install the required packages

!pip install tensorflow==2.4.1 tensorflow-gpu==2.4.1 tensorflow-hub opencv-python matplotlib
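If you want to verify the installation first, a quick optional check (my addition, not from the original post):

import tensorflow as tf
import tensorflow_hub as hub
import cv2

# Expect TensorFlow 2.4.1 if the pinned install above succeeded.
print(tf.__version__, hub.__version__, cv2.__version__)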

Import the following packages:

tensorflow_hub: loads the model

cv2: the OpenCV package, used to draw points and lines and for other image- and video-related work

import tensorflow as tf
import tensorflow_hub as hub
import cv2
from matplotlib import pyplot as plt
import numpy as np

Load and run the model; it returns the 17 keypoint results.

def movenet(input_image):
    """Runs detection on an input image.

    Args:
        input_image: A [1, height, width, 3] tensor representing the input
            image pixels. Note that the height/width should already be resized
            to match the expected input resolution of the model before passing
            into this function.

    Returns:
        A [1, 17, 3] float numpy array representing the predicted keypoint
        coordinates and scores.
    """
    # Download the model from TF Hub.
    model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
    model = model.signatures['serving_default']
    # The SavedModel expects an int32 input tensor.
    input_image = tf.cast(input_image, dtype=tf.int32)
    # Run model inference.
    outputs = model(input_image)
    # Output is a [1, 1, 17, 3] tensor; reshape it to [1, 17, 3].
    keypoints_with_scores = outputs['output_0'].numpy()
    keypoints_with_scores = keypoints_with_scores.reshape((1, 17, 3))
    return keypoints_with_scores
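One practical note: movenet calls hub.load on every invocation, so the model is re-resolved from TF Hub each time. For repeated inference (e.g. on video frames), a minimal variation that loads the model once might look like this (movenet_cached is my own name, not from the original post):

# Load the model a single time at module level.
_model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
_movenet_fn = _model.signatures['serving_default']

def movenet_cached(input_image):
    # Same steps as movenet above, but reusing the already-loaded signature.
    input_image = tf.cast(input_image, dtype=tf.int32)
    outputs = _movenet_fn(input_image)
    return outputs['output_0'].numpy().reshape((1, 17, 3))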

You can visit TensorFlow Hub to download the SinglePose and MultiPose models or to browse example programs:

https://tfhub.dev/s?module-type=image-pose-detection
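For example, the higher-accuracy (but slower) Thunder variant of the same single-pose model loads the same way; note that it expects a 256x256 input instead of 192x192 (URL taken from the TF Hub MoveNet family):

# Thunder: same output format, larger 256x256 input resolution.
thunder = hub.load("https://tfhub.dev/google/movenet/singlepose/thunder/4")
movenet_thunder = thunder.signatures['serving_default']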

Draw the 17 keypoints

def draw_keypoints(frame, keypoints, confidence_threshold):
    # Scale the normalized [0.0, 1.0] coordinates to the frame size.
    y, x, c = frame.shape
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))
    print("shaped in draw_keypoints:", shaped)
    # Draw a filled green circle for every keypoint above the threshold.
    for kp in shaped:
        ky, kx, kp_conf = kp
        if kp_conf > confidence_threshold:
            cv2.circle(frame, (int(kx), int(ky)), 6, (0, 255, 0), -1)

Draw the lines between the keypoints

The dictionary below tells us how the body keypoints connect. For example, the first entry (0, 1): 'm' says the nose connects to the left eye, and the last entry (14, 16): 'c' says the right knee connects to the right ankle.

The 17 body keypoints are ordered from 0 to 16 as follows:

nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, right ankle
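For convenience in later snippets, the same order can be written as a Python list (this list is my addition; it is not part of the original code):

# Index-to-name mapping for MoveNet's 17 keypoints, in output order.
KEYPOINT_NAMES = [
    'nose', 'left eye', 'right eye', 'left ear', 'right ear',
    'left shoulder', 'right shoulder', 'left elbow', 'right elbow',
    'left wrist', 'right wrist', 'left hip', 'right hip',
    'left knee', 'right knee', 'left ankle', 'right ankle'
]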

EDGES = {
    (0, 1): 'm',
    (0, 2): 'c',
    (1, 3): 'm',
    (2, 4): 'c',
    (0, 5): 'm',
    (0, 6): 'c',
    (5, 7): 'm',
    (7, 9): 'm',
    (6, 8): 'c',
    (8, 10): 'c',
    (5, 6): 'y',
    (5, 11): 'm',
    (6, 12): 'c',
    (11, 12): 'y',
    (11, 13): 'm',
    (13, 15): 'm',
    (12, 14): 'c',
    (14, 16): 'c'
}
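The values 'm', 'c' and 'y' are matplotlib-style color letters (magenta, cyan, yellow). The draw_connections function below ignores them and draws every line in a single color; if you wanted to honor them with OpenCV, one possible BGR mapping (my own assumption about the intended colors) would be:

# BGR tuples for the matplotlib color letters used in EDGES (assumed mapping).
EDGE_COLORS = {
    'm': (255, 0, 255),  # magenta
    'c': (255, 255, 0),  # cyan
    'y': (0, 255, 255),  # yellow
}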

The function draw_connections draws the connections between the 17 keypoints.

def draw_connections(frame, keypoints, edges, confidence_threshold):
    print('frame shape:', frame.shape)
    # Scale the normalized coordinates to the frame size.
    y, x, c = frame.shape
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))
    for edge, color in edges.items():
        p1, p2 = edge
        y1, x1, c1 = shaped[p1]
        y2, x2, c2 = shaped[p2]
        # Draw the edge only when both endpoints pass the threshold.
        if c1 > confidence_threshold and c2 > confidence_threshold:
            cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 4)

Loop over each detected person and draw their keypoints and connections

def loop_through_people(frame, keypoints_with_scores, edges, confidence_threshold):
    for person in keypoints_with_scores:
        draw_connections(frame, person, edges, confidence_threshold)
        draw_keypoints(frame, person, confidence_threshold)

Load your own image

image_path = 'fitness_pic.jpg'
image = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image)

Resize it to (192, 192)

Notes:

1. The height and width must be multiples of 32.

2. The aspect ratio should stay as close as possible to the original image's.

3. Neither the height nor the width may exceed 256. For example, a 720p image (i.e. 720x1280, HxW) should be resized and padded to 160x256 (see the sketch after this list).

To keep this example simple, we just use (192, 192), which is the input size the Lightning model expects.
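To make rule 3 concrete, here is a small sketch of how one might compute a conforming input size for an arbitrary image (compute_input_size is a hypothetical helper of mine, not part of MoveNet or the original post):

import math

def compute_input_size(height, width, max_dim=256, multiple=32):
    # Scale so the longer side becomes max_dim, then round each side up
    # to the nearest multiple of 32, capped at max_dim.
    scale = max_dim / max(height, width)
    target_h = min(max_dim, multiple * math.ceil(height * scale / multiple))
    target_w = min(max_dim, multiple * math.ceil(width * scale / multiple))
    return target_h, target_w

print(compute_input_size(720, 1280))  # (160, 256), matching the 720p example above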

# Resize and pad the image to keep the aspect ratio and fit the expected size.
input_size = 192
input_image = tf.expand_dims(image, axis=0)
input_image = tf.image.resize_with_pad(input_image, input_size, input_size)

Run model inference. The resulting keypoints_with_scores has shape [1, 17, 3].

The first dimension is the batch dimension, which always equals 1. The remaining dimensions hold the predicted keypoint locations and scores: 17 * 3 elements in the format [y_0, x_0, s_0, y_1, x_1, s_1, ..., y_16, x_16, s_16], where y_i and x_i are the y/x coordinates (normalized to the image frame, i.e. in the range [0.0, 1.0]) and s_i is the confidence score of the i-th keypoint. The 17 keypoints are ordered: [nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, right ankle]. A quick way to inspect the result is sketched after the inference call below.

# Run model inference.
keypoints_with_scores = movenet(input_image)
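As a quick sanity check, you can print each keypoint with its name and score, using the KEYPOINT_NAMES list sketched earlier (my addition):

# One line per keypoint: name, normalized (y, x) coordinates, confidence.
for name, (ky, kx, score) in zip(KEYPOINT_NAMES, keypoints_with_scores[0]):
    print(f"{name:15s} y={ky:.3f} x={kx:.3f} score={score:.2f}")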

Display the original image and the image annotated with the keypoints, side by side. Note that because the image was decoded with TensorFlow (RGB channel order) and displayed with matplotlib, the OpenCV color tuples above are interpreted as RGB here, so (0, 0, 255) renders as blue rather than OpenCV's usual BGR red.

display_image = tf.cast(tf.image.resize_with_pad(image, 1280, 1280), dtype=tf.int32)
display_image = np.array(display_image)
origin_image = np.copy(display_image)
loop_through_people(display_image, keypoints_with_scores, EDGES, 0.1)
plt.subplot(1, 2, 1)
plt.imshow(origin_image)
plt.subplot(1, 2, 2)
plt.imshow(display_image)
plt.show()

 

Complete code

import tensorflow as tf
import tensorflow_hub as hub
import cv2
from matplotlib import pyplot as plt
import numpy as np


def movenet(input_image):
    """Runs detection on an input image.

    Args:
        input_image: A [1, height, width, 3] tensor representing the input
            image pixels. Note that the height/width should already be resized
            to match the expected input resolution of the model before passing
            into this function.

    Returns:
        A [1, 17, 3] float numpy array representing the predicted keypoint
        coordinates and scores.
    """
    # Download the model from TF Hub.
    model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
    model = model.signatures['serving_default']
    # The SavedModel expects an int32 input tensor.
    input_image = tf.cast(input_image, dtype=tf.int32)
    # Run model inference.
    outputs = model(input_image)
    # Output is a [1, 1, 17, 3] tensor; reshape it to [1, 17, 3].
    keypoints_with_scores = outputs['output_0'].numpy()
    keypoints_with_scores = keypoints_with_scores.reshape((1, 17, 3))
    return keypoints_with_scores


def draw_keypoints(frame, keypoints, confidence_threshold):
    # Scale the normalized [0.0, 1.0] coordinates to the frame size.
    y, x, c = frame.shape
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))
    print("shaped in draw_keypoints:", shaped)
    # Draw a filled green circle for every keypoint above the threshold.
    for kp in shaped:
        ky, kx, kp_conf = kp
        if kp_conf > confidence_threshold:
            cv2.circle(frame, (int(kx), int(ky)), 6, (0, 255, 0), -1)


EDGES = {
    (0, 1): 'm',
    (0, 2): 'c',
    (1, 3): 'm',
    (2, 4): 'c',
    (0, 5): 'm',
    (0, 6): 'c',
    (5, 7): 'm',
    (7, 9): 'm',
    (6, 8): 'c',
    (8, 10): 'c',
    (5, 6): 'y',
    (5, 11): 'm',
    (6, 12): 'c',
    (11, 12): 'y',
    (11, 13): 'm',
    (13, 15): 'm',
    (12, 14): 'c',
    (14, 16): 'c'
}


def draw_connections(frame, keypoints, edges, confidence_threshold):
    print('frame shape:', frame.shape)
    # Scale the normalized coordinates to the frame size.
    y, x, c = frame.shape
    shaped = np.squeeze(np.multiply(keypoints, [y, x, 1]))
    for edge, color in edges.items():
        p1, p2 = edge
        y1, x1, c1 = shaped[p1]
        y2, x2, c2 = shaped[p2]
        # Draw the edge only when both endpoints pass the threshold.
        if c1 > confidence_threshold and c2 > confidence_threshold:
            cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 4)


def loop_through_people(frame, keypoints_with_scores, edges, confidence_threshold):
    for person in keypoints_with_scores:
        draw_connections(frame, person, edges, confidence_threshold)
        draw_keypoints(frame, person, confidence_threshold)


image_path = 'C:/Users/Harry/Desktop/fitness.jpeg'
image = tf.io.read_file(image_path)
# image = tf.compat.v1.image.decode_image(image)
image = tf.image.decode_jpeg(image)

# Resize and pad the image to keep the aspect ratio and fit the expected size.
input_size = 192
input_image = tf.expand_dims(image, axis=0)
input_image = tf.image.resize_with_pad(input_image, input_size, input_size)

# Run model inference.
keypoints_with_scores = movenet(input_image)

display_image = tf.cast(tf.image.resize_with_pad(image, 1280, 1280), dtype=tf.int32)
display_image = np.array(display_image)
origin_image = np.copy(display_image)
loop_through_people(display_image, keypoints_with_scores, EDGES, 0.1)
plt.subplot(1, 2, 1)
plt.imshow(origin_image)
plt.subplot(1, 2, 2)
plt.imshow(display_image)
plt.show()

References

https://tfhub.dev/google/movenet/singlepose/lightning/4
