
Notes on problems I recently ran into while reproducing MotionBERT and AlphaPose


I. AlphaPose

First, AlphaPose. The problems I hit were:

1. Inference hangs once loading reaches 100%.

2. OpenCV error: Failed to load OpenH264 library: openh264-1.8.0-win64.dll

Starting with the first problem, the console shows this warning: UserWarning: Failed to load image Python extension: warn(f"Failed to load image Python extension: {e}")

The warning means torchvision failed to load its compiled image extension. This can affect image-loading performance or functionality.

(1) Check the Pillow/PIL installation:

torchvision uses the Pillow library (the successor to PIL) to handle images. Make sure Pillow is installed in your environment and up to date:

pip install --upgrade Pillow

If Pillow is already installed, make sure its version is in a range that is compatible with your torchvision.

(2) Upgrade the dependencies:

pip install --upgrade torchvision
pip install --upgrade torch

The second warning

This warning says that the pretrained parameter of the torchvision model you are using is deprecated and may be removed in a future release; the replacement parameter is weights. It seems to have little practical impact.

Those were my fixes for the warnings. I then found that the actual cause of the hang was related to the video format and to the PyTorch, CUDA, and torchvision versions: the three must match each other. Also, after converting the video from mp4 to avi it ran (so the video format matters), but it then raised an error.

(3) OpenCV error: Failed to load OpenH264 library: openh264-1.8.0-win64.dll. Fix:

Download the matching DLL from Releases · cisco/openh264 · GitHub, then put it into C:\Windows\System32.
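A small sketch to confirm the DLL landed where OpenCV looks for it (Windows only; the file name must match the exact version named in the error message):

```python
import os

# DLL name taken from the error message above
dll_name = 'openh264-1.8.0-win64.dll'
system32 = os.path.join(os.environ.get('SystemRoot', r'C:\Windows'), 'System32')
print('OpenH264 DLL present:', os.path.isfile(os.path.join(system32, dll_name)))
```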

I referred to "MotionBert论文解读及详细复现教程" (CSDN blog).

Test AlphaPose on images:

python scripts/demo_inference.py --cfg configs/halpe_26/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/halpe26_fast_res50_256x192.pth --indir examples/demo/ --save_img

And run it on a video:

python scripts/demo_inference.py --cfg configs/halpe_26/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/halpe26_fast_res50_256x192.pth --video examples/demo/test_video.mp4 --save_video

II. MotionBERT

The problem I then hit with MotionBERT was:

RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:

    if __name__ == '__main__':
        freeze_support()
        ...

The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.

The cause is that code which runs on Linux does not necessarily run on Windows: Linux starts child processes with fork, while Windows uses spawn, which re-imports the main module. From what I found, there are two fixes.
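The idiom the error message asks for, in a minimal standalone form (stdlib only, unrelated to MotionBERT's actual code):

```python
import multiprocessing as mp

def square(x):
    return x * x

def main():
    # On Windows the 'spawn' start method re-imports this module in every
    # child process; without the __main__ guard below, that re-import would
    # try to create the Pool again and raise the bootstrapping RuntimeError.
    with mp.Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3]))

if __name__ == '__main__':
    main()
```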

References:
"RuntimeError: An attempt has been made to start a new process before the current pr..." (CSDN blog)
"解决RuntimeError: An attempt has been made to start a new process before...办法" (CSDN blog)

Fix 1: add a main function, call everything from inside it under the __name__ guard, and keep multi-process loading for speed:

import os
import sys
import numpy as np
import argparse
from tqdm import tqdm
import imageio
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from lib.utils.tools import *
from lib.utils.learning import *
from lib.utils.utils_data import flip_data
from lib.data.dataset_wild import WildDetDataset
from lib.utils.vismo import render_and_save

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", type=str, default="configs/pose3d/MB_ft_h36m_global_lite.yaml", help="Path to the config file.")
    parser.add_argument('-e', '--evaluate', default='checkpoint/pose3d/FT_MB_lite_MB_ft_h36m_global_lite/best_epoch.bin', type=str, metavar='FILENAME', help='checkpoint to evaluate (file name)')
    parser.add_argument('-j', '--json_path', type=str, help='alphapose detection result json path')
    parser.add_argument('-v', '--vid_path', type=str, help='video path')
    parser.add_argument('-o', '--out_path', type=str, help='output path')
    parser.add_argument('--pixel', action='store_true', help='align with pixel coordinates')
    parser.add_argument('--focus', type=int, default=None, help='target person id')
    parser.add_argument('--clip_len', type=int, default=243, help='clip length for network input')
    opts = parser.parse_args()
    return opts

def main(argv=None):
    opts = parse_args()
    args = get_config(opts.config)
    model_backbone = load_backbone(args)
    if torch.cuda.is_available():
        model_backbone = nn.DataParallel(model_backbone)
        model_backbone = model_backbone.cuda()
    print('Loading checkpoint', opts.evaluate)
    checkpoint = torch.load(opts.evaluate, map_location=lambda storage, loc: storage)
    model_backbone.load_state_dict(checkpoint['model_pos'], strict=True)
    model_pos = model_backbone
    model_pos.eval()
    testloader_params = {
        'batch_size': 1,
        'shuffle': False,
        'num_workers': 8,
        'pin_memory': True,
        'prefetch_factor': 4,
        'persistent_workers': True,
        'drop_last': False
    }
    vid = imageio.get_reader(opts.vid_path, 'ffmpeg')
    fps_in = vid.get_meta_data()['fps']
    vid_size = vid.get_meta_data()['size']
    os.makedirs(opts.out_path, exist_ok=True)
    if opts.pixel:
        # Keep relative scale with pixel coordinates
        wild_dataset = WildDetDataset(opts.json_path, clip_len=opts.clip_len, vid_size=vid_size, scale_range=None, focus=opts.focus)
    else:
        # Scale to [-1,1]
        wild_dataset = WildDetDataset(opts.json_path, clip_len=opts.clip_len, scale_range=[1,1], focus=opts.focus)
    test_loader = DataLoader(wild_dataset, **testloader_params)
    results_all = []
    with torch.no_grad():
        for batch_input in tqdm(test_loader):
            N, T = batch_input.shape[:2]
            if torch.cuda.is_available():
                batch_input = batch_input.cuda()
            if args.no_conf:
                batch_input = batch_input[:, :, :, :2]
            if args.flip:
                batch_input_flip = flip_data(batch_input)
                predicted_3d_pos_1 = model_pos(batch_input)
                predicted_3d_pos_flip = model_pos(batch_input_flip)
                predicted_3d_pos_2 = flip_data(predicted_3d_pos_flip)  # Flip back
                predicted_3d_pos = (predicted_3d_pos_1 + predicted_3d_pos_2) / 2.0
            else:
                predicted_3d_pos = model_pos(batch_input)
            if args.rootrel:
                predicted_3d_pos[:, :, 0, :] = 0  # [N,T,17,3]
            else:
                predicted_3d_pos[:, 0, 0, 2] = 0
            if args.gt_2d:
                predicted_3d_pos[..., :2] = batch_input[..., :2]
            results_all.append(predicted_3d_pos.cpu().numpy())
    results_all = np.hstack(results_all)
    results_all = np.concatenate(results_all)
    render_and_save(results_all, '%s/X3D.mp4' % (opts.out_path), keep_imgs=False, fps=fps_in)
    if opts.pixel:
        # Convert to pixel coordinates
        results_all = results_all * (min(vid_size) / 2.0)
        results_all[:, :, :2] = results_all[:, :, :2] + np.array(vid_size) / 2.0
    np.save('%s/X3D.npy' % (opts.out_path), results_all)

if __name__ == '__main__':
    sys.exit(main())

Fix 2: set num_workers to 0 for single-process loading. On its own this does not quite work; it fails immediately with:

"Anconda\envs\MotionBERT\lib\site-packages\torch\utils\data\dataloader.py", line 236, in __init__
    raise ValueError('prefetch_factor option could only be specified in multiprocessing.')
ValueError: prefetch_factor option could only be specified in multiprocessing. let num_workers > 0 to enable multiprocessing.

The error means prefetch_factor is only valid when num_workers > 0, so when switching to single-process loading that option (and persistent_workers) must be removed from the parameter dict as well.
