GitHub repository: https://github.com/openxrlab/xrmocap
Required environment:
Ubuntu 18.04, conda 22.9.0, CUDA 11.4
numpy 1.23.5
scipy 1.10.0
mmcv-full 1.6.1
mmdet 2.27.0
mmpose 0.29.0
xrmocap 0.8.0
xrprimer 0.7.0
# 1. Create the conda environment
conda create -n XRmocap python=3.7 -y
conda activate XRmocap
# 2. Install ffmpeg for video and images
conda install -y ffmpeg
# 3. Install pytorch
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch -y
# 4. Install pytorch3d
conda install -y -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -y -c bottler nvidiacub
conda install -y pytorch3d -c pytorch3d
# 5. Install mmcv-full
# pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.12.1/index.html
pip install mmcv-full==1.5.3
# 6. Install minimal_pytorch_rasterizer
pip install git+https://github.com/rmbashirov/minimal_pytorch_rasterizer.git
# 7. Install xrprimer
pip install xrprimer
# 8. Install mmhuman3d
pip install git+https://github.com/open-mmlab/mmhuman3d.git
# 9. Install cuDNN (libcudnn8 and libcudnn8-dev)
# dpkg -L <package> lists the files installed by that package
# For packages installed with apt-get: docs generally go to /usr/share; executables to /usr/bin; config files to /etc; libraries to /usr/lib
sudo apt-get update
sudo apt-get install -y --no-install-recommends libcudnn8=8.2.4.15-1+cuda11.4 libcudnn8-dev=8.2.4.15-1+cuda11.4
Strangely, sudo apt-get install -y --no-install-recommends libcudnn8=8.2.4.15-1+cuda11.4 libcudnn8-dev=8.2.4.15-1+cuda11.4
kept failing with this error:
E: Version '8.2.4.15-1+cuda11.4' for 'libcudnn8' was not found
E: Version '8.2.4.15-1+cuda11.4' for 'libcudnn8-dev' was not found
The likely cause is a CUDA/cuDNN version mismatch: with CUDA 11.4 and cuDNN 8.5.0 configured on this Ubuntu 18.04 machine, the matching libcudnn8 package is not 8.2.4.15-1.
The available libcudnn8 versions can be listed with:
apt-cache policy libcudnn8
apt-cache policy libcudnn8-dev
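If a build matching CUDA 11.4 does appear in the candidate list, it can be pinned explicitly instead of hard-coding 8.2.4.15-1; a sketch, where the version string is a placeholder you would copy from the apt-cache output:
# Hypothetical: substitute the exact version string printed by apt-cache policy
CUDNN_VER="8.x.y.z-1+cuda11.4"
sudo apt-get install -y --no-install-recommends "libcudnn8=${CUDNN_VER}" "libcudnn8-dev=${CUDNN_VER}"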
In my case no CUDA 11.4 build was listed, so the only option was to let apt install libcudnn8 8.5.0.96-1+cuda11.7 automatically. (The local CUDA is 11.4 and cuDNN is 8.5.0; hopefully the deb package is backward compatible. The exact impact is unknown.)
sudo apt-get install -y --no-install-recommends libcudnn8 libcudnn8-dev
# If the command above fails with "You might want to run 'apt --fix-broken install' to correct these", run sudo apt --fix-broken install first, then retry the command above.
# Pin libcudnn8 so it is not upgraded automatically
sudo apt-mark hold libcudnn8
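To confirm the hold took effect, a quick check (not part of the original steps):
apt-mark showhold | grep libcudnn8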
# Clear the package cache (I don't think this is necessary)
# rm -rf /var/lib/apt/lists/*
# Inspect the packages to confirm the installation
dpkg -l libcudnn8
dpkg -l libcudnn8-dev
TensorRT has to be downloaded manually in advance. I chose this tar package: TensorRT 8.6 GA for Linux x86_64 and CUDA 11.0, 11.1, 11.2, 11.3, 11.4, 11.5, 11.6, 11.7 and 11.8 TAR Package
(download page: https://developer.nvidia.com/nvidia-tensorrt-8x-download)
After the download finishes, move the tar package into Software_Anzhuang/CUDA.
# 10. Install TensorRT
mv 下载/TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-11.8.tar.gz Software_Anzhuang/CUDA
cd Software_Anzhuang/CUDA
tar -xzvf TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-11.8.tar.gz
cd TensorRT-8.6.1.6/python
# The XRmocap environment runs Python 3.7, so install tensorrt-8.6.1-cp37-none-linux_x86_64.whl
pip install tensorrt-8.6.1-cp37-none-linux_x86_64.whl
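A quick sanity check that the wheel works (TensorRT 8.x exposes __version__). Note also that the shared libraries from the tar package are not on the loader's search path by default, so exporting LD_LIBRARY_PATH (path taken from the steps above) helps later steps find libnvinfer:
python -c "import tensorrt; print(tensorrt.__version__)"  # expect 8.6.1
export LD_LIBRARY_PATH=/home/sqy/Software_Anzhuang/CUDA/TensorRT-8.6.1.6/lib:$LD_LIBRARY_PATH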
# 11. Install mmdeploy and build the TensorRT ops
conda install cmake -y
mkdir XRmocap && cd XRmocap
mkdir mmdeploy && cd mmdeploy
git clone https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
# Force git back to mmdeploy 0.12.0; it is the only version the author has verified to work
git reset --hard 1b048d88ca11782de1e9ebf6f9583259167a1d5b
pip install -e .
mkdir -p build && cd build
# Note: TENSORRT_DIR must be changed to the path where TensorRT was extracted in step 10; the rest can stay as-is
# Configure with cmake
cmake -DCMAKE_CXX_COMPILER=g++ -DMMDEPLOY_TARGET_BACKENDS=trt \
    -DTENSORRT_DIR=/home/sqy/Software_Anzhuang/CUDA/TensorRT-8.6.1.6 \
    -DCUDNN_DIR=/usr/lib/x86_64-linux-gnu ..
# Build and install (installing just copies a few binaries from src into /usr/local/bin)
make -j8 && make install
# make clean removes old build artifacts for a fresh rebuild; run it only after a failed build, not after a successful one
# make clean
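Before moving on, it is worth a quick sanity check that the editable install and the pinned version took effect (a sketch; mmdeploy exposes a version string):
python -c "import mmdeploy; print(mmdeploy.__version__)"  # should print 0.12.0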
# 12. Clone xrmocap
cd XRmocap && mkdir xrmocap && cd xrmocap
git clone https://github.com/openxrlab/xrmocap.git
cd xrmocap
# install requirements for build
pip install -r requirements/build.txt
# install requirements for runtime
pip install -r requirements/runtime.txt
# install requirements for services
pip install -r requirements/service.txt
# install xrmocap
rm -rf .eggs
pip install -e .
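Likewise, a quick check that xrmocap is importable (assuming it exposes __version__ like the other OpenXRLab packages):
python -c "import xrmocap; print(xrmocap.__version__)"  # expect 0.8.0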
Installation complete. For reference, the full environment:
# packages in environment at /home/sqy/anaconda3/envs/XRmocap:
#
# Name  Version  Build  Channel
_libgcc_mutex  0.1  main
_openmp_mutex  5.1  1_gnu
absl-py  1.4.0  pypi_0  pypi
addict  2.4.0  pypi_0  pypi
aniposelib  0.3.9  pypi_0  pypi
astropy  4.3.1  pypi_0  pypi
attrs  23.1.0  pypi_0  pypi
bidict  0.22.1  pypi_0  pypi
blas  1.0  mkl
brotlipy  0.7.0  py37h27cfd23_1003
bzip2  1.0.8  h7b6447c_0
c-ares  1.19.0  h5eee18b_0
ca-certificates  2023.05.30  h06a4308_0
cachelib  0.9.0  pypi_0  pypi
cdflib  0.3.20  pypi_0  pypi
certifi  2022.12.7  py37h06a4308_0
cffi  1.15.1  py37h5eee18b_3
cfgv  3.3.1  pypi_0  pypi
charset-normalizer  2.0.4  pyhd3eb1b0_0
chumpy  0.70  pypi_0  pypi
click  8.1.6  pypi_0  pypi
cmake  3.22.1  h1fce559_0
colorama  0.4.6  pyhd8ed1ab_0  conda-forge
colorlog  6.7.0  pypi_0  pypi
colormap  1.0.4  pypi_0  pypi
cryptography  39.0.1  py37h9ce1e76_0
cudatoolkit  11.3.1  h2bc3f7f_2
cycler  0.11.0  pypi_0  pypi
cython  3.0.0  pypi_0  pypi
deprecated  1.2.14  pypi_0  pypi
dill  0.3.7  pypi_0  pypi
distlib  0.3.7  pypi_0  pypi
easydev  0.12.1  pypi_0  pypi
einops  0.6.1  pypi_0  pypi
expat  2.4.9  h6a678d5_0
ffmpeg  4.2.2  h20bf706_0
filelock  3.12.2  pypi_0  pypi
filterpy  1.4.5  pypi_0  pypi
flask  2.2.5  pypi_0  pypi
flask-api  3.1  pypi_0  pypi
flask-caching  2.0.2  pypi_0  pypi
flask-cors  4.0.0  pypi_0  pypi
flask-socketio  5.3.5  pypi_0  pypi
flatbuffers  23.5.26  pypi_0  pypi
flit-core  3.6.0  pyhd3eb1b0_0
fonttools  4.38.0  pypi_0  pypi
freetype  2.12.1  h4a9f257_0
fvcore  0.1.5.post20210915  py37  fvcore
giflib  5.2.1  h5eee18b_3
gmp  6.2.1  h295c915_3
gnutls  3.6.15  he1e5248_0
grpcio  1.57.0  pypi_0  pypi
h11  0.14.0  pypi_0  pypi
h5py  3.8.0  pypi_0  pypi
identify  2.5.24  pypi_0  pypi
idna  3.4  py37h06a4308_0
imageio  2.31.1  pypi_0  pypi
importlib-metadata  6.7.0  pypi_0  pypi
intel-openmp  2021.4.0  h06a4308_3561
iopath  0.1.9  py37  iopath
itsdangerous  2.1.2  pypi_0  pypi
jinja2  3.1.2  pypi_0  pypi
jpeg  9e  h5eee18b_1
json-tricks  3.17.2  pypi_0  pypi
kiwisolver  1.4.4  pypi_0  pypi
krb5  1.20.1  h568e23c_1
lame  3.100  h7b6447c_0
lcms2  2.12  h3be6417_0
ld_impl_linux-64  2.38  h1181459_1
lerc  3.0  h295c915_0
libcurl  8.1.1  h91b91d3_2
libdeflate  1.17  h5eee18b_0
libedit  3.1.20221030  h5eee18b_0
libev  4.33  h7f8727e_1
libffi  3.4.4  h6a678d5_0
libgcc-ng  11.2.0  h1234567_1
libgomp  11.2.0  h1234567_1
libidn2  2.3.4  h5eee18b_0
libnghttp2  1.52.0  ha637b67_1
libopus  1.3.1  h7b6447c_0
libpng  1.6.39  h5eee18b_0
libssh2  1.10.0  h37d81fd_2
libstdcxx-ng  11.2.0  h1234567_1
libtasn1  4.19.0  h5eee18b_0
libtiff  4.5.0  h6a678d5_2
libunistring  0.9.10  h27cfd23_0
libuv  1.44.2  h5eee18b_0
libvpx  1.7.0  h439df22_0
libwebp  1.2.4  h11a3e52_1
libwebp-base  1.2.4  h5eee18b_1
lz4-c  1.9.4  h6a678d5_0
markupsafe  2.1.3  pypi_0  pypi
matplotlib  3.5.3  pypi_0  pypi
mediapipe  0.9.0.1  pypi_0  pypi
minimal-pytorch-rasterizer  0.5  pypi_0  pypi
mkl  2021.4.0  h06a4308_640
mkl-service  2.4.0  py37h7f8727e_0
mkl_fft  1.3.1  py37hd3c417c_0
mkl_random  1.2.2  py37h51133e4_0
mmcv-full  1.5.3  pypi_0  pypi
mmdeploy  0.12.0  dev_0  <develop>
mmdet  2.27.0  pypi_0  pypi
mmhuman3d  0.11.0  pypi_0  pypi
mmpose  0.29.0  pypi_0  pypi
multiprocess  0.70.15  pypi_0  pypi
munkres  1.1.4  pypi_0  pypi
ncurses  6.4  h6a678d5_0
nettle  3.7.3  hbbd107a_1
networkx  2.6.3  pypi_0  pypi
nodeenv  1.8.0  pypi_0  pypi
numpy  1.21.5  py37h6c91a56_3
numpy-base  1.21.5  py37ha15fc14_3
nvidiacub  1.10.0  0  bottler
onnx  1.12.0  pypi_0  pypi
opencv-contrib-python  4.8.0.76  pypi_0  pypi
opencv-python  4.8.0.76  pypi_0  pypi
openh264  2.1.1  h4ff587b_0
openssl  1.1.1v  h7f8727e_0
packaging  23.1  pypi_0  pypi
pandas  1.3.5  pypi_0  pypi
pexpect  4.8.0  pypi_0  pypi
pickle5  0.0.12  pypi_0  pypi
pillow  9.4.0  py37h6a678d5_0
pip  22.3.1  py37h06a4308_0
platformdirs  3.10.0  pypi_0  pypi
plyfile  0.9  pypi_0  pypi
portalocker  1.4.0  py_0  conda-forge
pre-commit  2.21.0  pypi_0  pypi
prettytable  3.7.0  pypi_0  pypi
protobuf  3.20.1  pypi_0  pypi
ptyprocess  0.7.0  pypi_0  pypi
pycocotools  2.0.7  pypi_0  pypi
pycparser  2.21  pyhd3eb1b0_0
pyerfa  2.0.0.3  pypi_0  pypi
pygments  2.16.1  pypi_0  pypi
pyopenssl  23.0.0  py37h06a4308_0
pyparsing  3.1.1  pypi_0  pypi
pysocks  1.7.1  py37_1
python  3.7.16  h7a1cb2a_0
python-dateutil  2.8.2  pypi_0  pypi
python-engineio  4.5.1  pypi_0  pypi
python-socketio  5.8.0  pypi_0  pypi
python_abi  3.7  2_cp37m  conda-forge
pytorch  1.12.1  py3.7_cuda11.3_cudnn8.3.2_0  pytorch
pytorch-mutex  1.0  cuda  pytorch
pytorch3d  0.7.1  py37_cu113_pyt1121  pytorch3d
pytz  2023.3  pypi_0  pypi
pywavelets  1.3.0  pypi_0  pypi
pyyaml  6.0  py37h540881e_4  conda-forge
readline  8.2  h5eee18b_0
requests  2.28.1  py37h06a4308_0
rhash  1.4.1  h3c74f83_1
rtree  1.0.1  pypi_0  pypi
scikit-image  0.19.3  pypi_0  pypi
scipy  1.7.3  pypi_0  pypi
setuptools  65.6.3  py37h06a4308_0
simple-websocket  0.10.1  pypi_0  pypi
six  1.16.0  pyhd3eb1b0_1
smplx  0.1.28  pypi_0  pypi
sqlite  3.41.2  h5eee18b_0
tabulate  0.9.0  pyhd8ed1ab_1  conda-forge
tensorrt  8.6.1  pypi_0  pypi
termcolor  2.3.0  pyhd8ed1ab_0  conda-forge
terminaltables  3.1.10  pypi_0  pypi
tifffile  2021.11.2  pypi_0  pypi
tk  8.6.12  h1ccaba5_0
toml  0.10.2  pypi_0  pypi
tomli  2.0.1  pypi_0  pypi
torchaudio  0.12.1  py37_cu113  pytorch
torchvision  0.13.1  py37_cu113  pytorch
tqdm  4.66.1  pyhd8ed1ab_0  conda-forge
trimesh  3.23.1  pypi_0  pypi
typing-extensions  4.7.1  pypi_0  pypi
typing_extensions  4.4.0  py37h06a4308_0
urllib3  1.26.14  py37h06a4308_0
vedo  2023.4.6  pypi_0  pypi
virtualenv  20.24.3  pypi_0  pypi
vtk  9.2.6  pypi_0  pypi
wcwidth  0.2.6  pypi_0  pypi
werkzeug  2.2.3  pypi_0  pypi
wheel  0.38.4  py37h06a4308_0
wrapt  1.15.0  pypi_0  pypi
wsproto  1.2.0  pypi_0  pypi
x264  1!157.20191217  h7b6447c_0
xrmocap  0.8.0  dev_0  <develop>
xrprimer  0.7.0  pypi_0  pypi
xtcocotools  1.13  pypi_0  pypi
xz  5.4.2  h5eee18b_0
yacs  0.1.8  pyhd8ed1ab_0  conda-forge
yaml  0.2.5  h7f98852_2  conda-forge
yapf  0.40.1  pypi_0  pypi
zipp  3.15.0  pypi_0  pypi
zlib  1.2.13  h5eee18b_0
zstd  1.5.5  hc292b87_0
Create an xrmocap_data folder and arrange it in the following structure (this layout is the same as the mmHuman3D data layout).
Download the perception models with the provided script, including the detection, 2D pose estimation, tracking and CamStyle models:
sh scripts/download_weight.sh
When the download finishes, a weight folder is created automatically with the following structure:
Procedure:
Download the raw .smc files of the HuMMan dataset from here. Place the .smc files under xrmocap_data/humman/ (they are large, about 19 GB). Then run:
python tools/process_smc.py \
--estimator_config configs/humman_mocap/mview_sperson_smpl_estimator.py \
--smc_path xrmocap_data/humman/p000455_a000986.smc \
--output_dir xrmocap_data/humman/p000455_a000986_output \
--visualize
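When it finishes, the results land in the output_dir given above and can be listed for a quick look:
ls xrmocap_data/humman/p000455_a000986_output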
This example relies on Shelf_50.zip, a small test dataset for quick demos. It contains 50 frames from the Shelf sequence, captured from 5 calibrated and synchronized camera views.
Optimization-based methods associate 2D keypoints across views and generate 3D keypoints by triangulation or other means. Taking MVPose as an example, run the following:
# Prepare the Shelf_50.zip dataset
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/example_resources/Shelf_50.zip -P xrmocap_data
cd xrmocap_data
unzip -q Shelf_50.zip
rm Shelf_50.zip
cd ..
The xrmocap_data directory should now have the following structure:
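The original structure screenshot is not reproduced here; the sketch below is pieced together from the paths used later in this walkthrough, so treat it as an assumption rather than a verified listing:
tree -L 2 xrmocap_data
# xrmocap_data
# └── Shelf_50
#     ├── Shelf                       <- image frames from the 5 camera views
#     ├── xrmocap_meta_testset_small  <- annotations and calibrated camera parameters
#     └── image_and_camera_param.txt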
Modify tools/mview_mperson_topdown_estimator.py.
Create the folder output/estimation/kps3d.
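From the shell, for example:
mkdir -p output/estimation/kps3d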
Run the demo:
python tools/mview_mperson_topdown_estimator.py \
--estimator_config 'configs/mvpose_tracking/mview_mperson_topdown_estimator.py' \
--image_and_camera_param 'xrmocap_data/Shelf_50/image_and_camera_param.txt' \
--start_frame 300 \
--end_frame 350 \
--output_dir 'output/estimation' \
--enable_log_file
In the output folder, kps3d and smpl store the multi-view estimation videos respectively;
the .txt file is the log, which records some evaluation metrics;
the .npz files store the spatial 3D keypoints of the two people.
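After the run, the contents described above can be inspected directly:
ls output/estimation
# expected: kps3d/ and smpl/ directories, a .txt log with evaluation metrics, and .npz keypoint files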
Result visualization
There are five views in total; the videos are shown below:
View 0
View 1
View 2
View 3
View 4
Note: due to an earlier version update, improperly set parameters of the visualize_keypoints3d_projected API kept causing the following error.
The fix came from an issue I filed upstream; thanks to the maintainers for the reply.
Learning-based methods adopt an end-to-end learning scheme, so they require training before inference. Taking the Multi-view Pose Transformer (MvP) as an example, you can download a pre-trained MvP model and run it on Shelf_50 as follows:
sh scripts/download_install_deformable.sh
If deformable 1.0 appears in conda list, the installation succeeded:
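One way to check from the shell:
conda list | grep -i deformable  # expect a line like: deformable  1.0  ...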
# Prepare the Shelf_50.zip dataset (identical to the optimization-based MVPose example above; skip if already done)
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/example_resources/Shelf_50.zip -P xrmocap_data
cd xrmocap_data
unzip -q Shelf_50.zip
rm Shelf_50.zip
cd ..
# Download the pre-trained model
mkdir -p weight/mvp
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth -P weight/mvp
The weight folder should now have the following structure:
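Based on the download commands above, the MvP part of the layout should be (the perception models fetched earlier by download_weight.sh will also be present):
tree weight/mvp
# weight/mvp
# └── xrmocap_mvp_shelf-22d1b5ed_20220831.pth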
# Evaluation
sh ./scripts/eval_mvp.sh 1 configs/mvp/shelf_config/mvp_shelf_50.py weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth
If everything is configured correctly, the evaluation results appear in the terminal.
sh ./scripts/eval_mvp.sh 1 configs/mvp/shelf_config/mvp_shelf_50.py weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth
+ GPUS_PER_NODE=1
+ CFG_FILE=configs/mvp/shelf_config/mvp_shelf_50.py
+ MODEL_PATH=weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth
+ python -m torch.distributed.launch --nproc_per_node=1 --master_port 65530 --use_env tools/eval_model.py --cfg configs/mvp/shelf_config/mvp_shelf_50.py --model_path weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth
/home/sqy/anaconda3/envs/XRmocap/lib/python3.7/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your script expects `--local_rank` argument to be set, please change it to read from `os.environ['LOCAL_RANK']` instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions
  FutureWarning,
2023-09-07 10:11:32,650 - mvp_eval - INFO - | distributed init (rank 0): env://
2023-09-07 10:11:32,650 - torch.distributed.distributed_c10d - INFO - Added key: store_based_barrier_key:1 to store for rank: 0
2023-09-07 10:11:32,650 - torch.distributed.distributed_c10d - INFO - Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
2023-09-07 10:11:33,618 - mvp_eval - INFO - Namespace(cfg='configs/mvp/shelf_config/mvp_shelf_50.py', device='cuda', dist_url='env://', model_path='weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth', seed=42, weight_decay=0.0001, world_size=1)
2023-09-07 10:11:33,618 - mvp_eval - INFO - {'type': 'MVPTrainer', 'logger': <Logger mvp_eval (INFO)>, 'device': 'cuda', 'seed': 42, 'distributed': True,
    'model_path': 'weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth', 'gpu_idx': 0,
    'final_output_dir': '/home/sqy/XRmocap/xrmocap/xrmocap/output/shelf/multi_view_pose_transformer_50/mvp_shelf_50_20230907101132',
    'workers': 1, 'train_dataset': 'shelf', 'test_dataset': 'shelf', 'test_batch_size': 1, 'print_freq': 100,
    'cudnn_setup': {'benchmark': True, 'deterministic': False, 'enable': True},
    'dataset_setup': {'test_dataset_setup': {'type': 'MVPDataset', 'test_mode': True, 'meta_path': './xrmocap_data/Shelf_50/xrmocap_meta_testset_small/'},
        'base_dataset_setup': {'type': 'MVPDataset', 'dataset': 'shelf', 'data_root': './xrmocap_data/Shelf_50/Shelf/',
            'img_pipeline': [{'type': 'LoadImageCV2'}, {'type': 'BGR2RGB'}, {'type': 'WarpAffine', 'image_size': [800, 608], 'flag': 'inter_linear'}, {'type': 'ToTensor'}, {'type': 'Normalize', 'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}],
            'image_size': [800, 608], 'heatmap_size': [200, 152], 'metric_unit': 'millimeter', 'shuffled': False, 'gt_kps3d_convention': 'campus', 'cam_world2cam': True, 'n_max_person': 10, 'n_views': 5, 'n_kps': 14}},
    'mvp_setup': {'type': 'MviewPoseTransformer', 'n_kps': 15, 'n_instance': 10, 'image_size': [800, 608], 'space_size': [8000.0, 8000.0, 2000.0], 'space_center': [450.0, -320.0, 800.0], 'd_model': 256, 'use_feat_level': [0, 1, 2], 'n_cameras': 5, 'query_embed_type': 'person_kp', 'with_pose_refine': True, 'loss_weight_loss_ce': 0.0, 'loss_per_kp': 5.0, 'aux_loss': True, 'pred_conf_threshold': 0.5, 'pred_class_fuse': 'mean', 'projattn_pos_embed_mode': 'use_rayconv', 'query_adaptation': True, 'convert_kp_format_indexes': [14, 13, 12, 6, 7, 8, 11, 10, 9, 3, 4, 5, 0, 1],
        'backbone_setup': {'type': 'PoseResNet', 'n_layers': 50, 'n_kps': 15, 'deconv_with_bias': False, 'n_deconv_layers': 3, 'n_deconv_filters': [256, 256, 256], 'n_deconv_kernels': [4, 4, 4], 'final_conv_kernel': 1},
        'proj_attn_setup': {'type': 'ProjAttn', 'd_model': 256, 'n_levels': 1, 'n_heads': 8, 'n_points': 4, 'projattn_pos_embed_mode': 'use_rayconv'},
        'decoder_layer_setup': {'type': 'MvPDecoderLayer', 'space_size': [8000.0, 8000.0, 2000.0], 'space_center': [450.0, -320.0, 800.0], 'image_size': [800, 608], 'd_model': 256, 'dim_feedforward': 1024, 'dropout': 0.1, 'activation': 'relu', 'n_heads': 8, 'detach_refpoints_cameraprj': True, 'fuse_view_feats': 'cat_proj', 'n_views': 5},
        'decoder_setup': {'type': 'MvPDecoder', 'n_decoder_layer': 6, 'return_intermediate': True},
        'pos_encoding_setup': {'type': 'PositionEmbeddingSine', 'normalize': True, 'temperature': 10000},
        'pose_embed_setup': {'type': 'MLP', 'd_model': 256, 'pose_embed_layer': 3},
        'matcher_setup': {'type': 'HungarianMatcher', 'match_coord': 'norm'},
        'criterion_setup': {'type': 'SetCriterion', 'image_size': [800, 608], 'n_person': 10, 'loss_kp_type': 'l1', 'focal_alpha': 0.25, 'space_size': [8000.0, 8000.0, 2000.0], 'space_center': [450.0, -320.0, 800.0], 'use_loss_pose_perprojection': True, 'loss_pose_normalize': False, 'pred_conf_threshold': 0.5}},
    'evaluation_setup': {'type': 'End2EndEvaluation', 'dataset_name': 'shelf', 'pred_kps3d_convention': 'campus', 'gt_kps3d_convention': 'campus', 'eval_kps3d_convention': 'campus', 'n_max_person': 10, 'checkpoint_select': 'pcp_total_mean',
        'metric_list': [{'type': 'PredictionMatcher', 'name': 'matching'}, {'type': 'MPJPEMetric', 'name': 'mpjpe', 'unit_scale': 1}, {'type': 'PAMPJPEMetric', 'name': 'pa_mpjpe', 'unit_scale': 1}, {'type': 'PCKMetric', 'name': 'pck', 'use_pa_mpjpe': True, 'threshold': [50, 100]}, {'type': 'PCPMetric', 'name': 'pcp', 'threshold': 0.5, 'show_table': True, 'selected_limbs_names': ['left_lower_leg', 'right_lower_leg', 'left_upperarm', 'right_upperarm', 'left_forearm', 'right_forearm', 'left_thigh', 'right_thigh'], 'additional_limbs_names': [['jaw', 'headtop']]}, {'type': 'PrecisionRecallMetric', 'name': 'precision_recall', 'show_table': False, 'threshold': [25, 50, 75, 100, 125, 150, 500]}],
        'pick_dict': {'mpjpe': ['mpjpe_mean', 'mpjpe_std'], 'pa_mpjpe': ['pa_mpjpe_mean', 'pa_mpjpe_std'], 'pck': ['pck@50', 'pck@100'], 'pcp': ['pcp_total_mean'], 'precision_recall': ['recall@500']}}}
2023-09-07 10:11:33,618 - mvp_eval - INFO - Loading data ..
2023-09-07 10:11:33,627 - mvp_eval - INFO - Constructing models ..
2023-09-07 10:11:33,890 - mvp_eval - INFO - Load saved models state weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth
2023-09-07 10:11:33,891 - mvp_eval - INFO - load checkpoint from local path: weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth
/home/sqy/XRmocap/xrmocap/xrmocap/xrmocap/data/dataset/mvp_dataset.py:207: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated, please use a mask with dtype torch.bool instead. (Triggered internally at /opt/conda/conda-bld/pytorch_1659484809535/work/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1581.)
  person_vis[np.logical_not(check)] = 0
/home/sqy/anaconda3/envs/XRmocap/lib/python3.7/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1659484809535/work/aten/src/ATen/native/TensorShape.cpp:2894.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/home/sqy/XRmocap/xrmocap/xrmocap/xrmocap/utils/camera_utils.py:74: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  device=device).view(batch_size, n_views, 2, 1)
/home/sqy/anaconda3/envs/XRmocap/lib/python3.7/site-packages/torch/nn/functional.py:4216: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
  "Default grid_sample and affine_grid behavior has changed "
2023-09-07 10:11:37,662 - mvp_eval - INFO - Test: [0/51]  Time: 3.671s (3.671s)  Speed: 1.4 samples/s  Data: 0.289s (0.289s)  Memory 351020544.0
2023-09-07 10:11:51,867 - mvp_eval - INFO - Test: [50/51]  Time: 0.280s (0.351s)  Speed: 17.9 samples/s  Data: 0.000s (0.006s)  Memory 351020544.0
2023-09-07 10:11:51,906 - mvp_eval - INFO - Left_Shoulder_To_Right_Shoulder is not selected!
2023-09-07 10:11:51,906 - mvp_eval - INFO - Left_Shoulder_To_Left_Hip_Extra is not selected!
2023-09-07 10:11:51,906 - mvp_eval - INFO - Right_Shoulder_To_Right_Hip_Extra is not selected!
2023-09-07 10:11:51,906 - mvp_eval - INFO - Right_Hip_Extra_To_Left_Hip_Extra is not selected!
/home/sqy/XRmocap/xrmocap/xrmocap/xrmocap/core/evaluation/metrics/pcp_metric.py:211: RuntimeWarning: invalid value encountered in true_divide
  np.abs(check_result), axis=(0, 2))
/home/sqy/XRmocap/xrmocap/xrmocap/xrmocap/core/evaluation/metrics/pcp_metric.py:221: RuntimeWarning: invalid value encountered in true_divide
  np.abs(check_result[:, :, v]), axis=(0, 2))
2023-09-07 10:11:51,921 - mvp_eval - INFO - Detailed table for PCPMetric
+-----------------+---------+---------+---------+---------+
| Bone Group      | Actor 0 | Actor 1 | Actor 2 | Average |
+-----------------+---------+---------+---------+---------+
| left_lower_leg  | 100.00  | nan     | 100.00  | nan     |
| right_lower_leg | 100.00  | nan     | 100.00  | nan     |
| left_upperarm   | 100.00  | nan     | 100.00  | nan     |
| right_upperarm  | 100.00  | nan     | 74.51   | nan     |
| left_forearm    | 100.00  | nan     | 100.00  | nan     |
| right_forearm   | 100.00  | nan     | 74.51   | nan     |
| left_thigh      | 100.00  | nan     | 100.00  | nan     |
| right_thigh     | 100.00  | nan     | 100.00  | nan     |
| jaw-headtop     | 96.08   | nan     | 100.00  | nan     |
| total           | 99.56   | nan     | 94.34   | nan     |
+-----------------+---------+---------+---------+---------+
2023-09-07 10:11:51,932 - mvp_eval - INFO -
+------------------------------+--------+
| Metric name                  | Value  |
+------------------------------+--------+
| mpjpe: mpjpe_mean            | 45.73  |
| mpjpe: mpjpe_std             | 26.99  |
| pa_mpjpe: pa_mpjpe_mean      | 35.65  |
| pa_mpjpe: pa_mpjpe_std       | 20.55  |
| pcp: pcp_total_mean          | nan    |
| pck: pck@50                  | 80.46  |
| pck: pck@100                 | 98.74  |
| precision_recall: recall@500 | 100.00 |
+------------------------------+--------+
2023-09-07 10:11:51,932 - mvp_eval - INFO - Saving 3D keypoints to: /home/sqy/XRmocap/xrmocap/xrmocap/output/shelf/multi_view_pose_transformer_50/mvp_shelf_50_20230907101132/pred_keypoints3d.npz
2023-09-07 10:11:51,945 - mvp_eval - INFO - Saving 3D keypoints to: /home/sqy/XRmocap/xrmocap/xrmocap/output/shelf/multi_view_pose_transformer_50/mvp_shelf_50_20230907101132/gt_keypoints3d.npz
Next, run the multi-view multi-person end-to-end estimator demo:
python tools/mview_mperson_end2end_estimator.py \
--output_dir ./output/estimation \
--model_dir weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth \
--estimator_config configs/modules/core/estimation/mview_mperson_end2end_estimator.py \
--image_and_camera_param ./xrmocap_data/Shelf_50/image_and_camera_param.txt \
--start_frame 300 \
--end_frame 350 \
--enable_log_file
This again fails with CUDA out of memory.
Many fixes for this circulate online; the two most common are reducing image_size and reducing batch_size.
However, in xrmocap/core/estimation/mview_mperson_end2end_estimator.py the batch_size is already 1,
and the estimator_config section of Multi-view Multi-person End-to-end Estimator.md states:
"When inferring images stored on disk, set load_batch_size to a reasonable value will prevent your machine from out of memory, for MvP only batch_size=1 is supported"
So we instead edit configs/modules/core/estimation/mview_mperson_end2end_estimator.py and halve image_size to [400, 304], as sketched below.
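A sketch of that edit, assuming image_size appears in the config as the literal [800, 608] (verify against your checkout, and back the file up first):
sed -i 's/\[800, 608\]/[400, 304]/g' configs/modules/core/estimation/mview_mperson_end2end_estimator.py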
Running the visualization command again now succeeds.
Visualized results:
View 0
View 1
View 2
View 3
View 4
Finally, an archive of all the local xrmocap files is attached: the XRmocap code and supplementary files for this post.