git clone https://github.com/IDEA-Research/Grounded-Segment-Anything.git
After cloning Grounded-Segment-Anything you will find that the following two submodule folders (VISAM and grounded-sam-osx) are empty, so they have to be fetched separately: download the archives manually from their repositories, extract them, and place them in the corresponding locations inside the Grounded-Segment-Anything folder.
Model weights: download the following checkpoints and place them in the Grounded-Segment-Anything folder:
groundingdino_swint_ogc.pth
sam_vit_h_4b8939.pth
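Before running the demos, it is worth verifying that both checkpoints actually ended up in the repo root. A minimal sketch (the helper is my own, not part of the repo; pass the path you cloned into):

```python
from pathlib import Path

# The two checkpoints the demos below expect in the repo root.
REQUIRED_CHECKPOINTS = [
    "groundingdino_swint_ogc.pth",
    "sam_vit_h_4b8939.pth",
]

def missing_checkpoints(repo_dir):
    """Return the names of required weight files not present in repo_dir."""
    repo = Path(repo_dir)
    return [name for name in REQUIRED_CHECKPOINTS if not (repo / name).is_file()]
```

If the returned list is non-empty, the demo scripts below will fail when loading weights.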
conda create -n env_grounded_segment_anything python==3.8.10
conda activate env_grounded_segment_anything
pip install torch==2.1.0+cu121 torchvision==0.16.0+cu121 torchaudio==2.1.0 -f https://download.pytorch.org/whl/torch_stable.html
export AM_I_DOCKER=False
export BUILD_WITH_CUDA=True
export CUDA_HOME=/path/to/cuda-11.3   # point this at your CUDA toolkit; ideally it matches the CUDA version of the torch build installed above (cu121)
pip install -r requirements.txt
python -m pip install -e segment_anything
python -m pip install -e GroundingDINO
python setup.py build
python setup.py install
pip install --upgrade diffusers[torch]
pip install opencv-python pycocotools matplotlib onnxruntime onnx ipykernel
pip install --upgrade transformers
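A common failure mode at this point is ending up with a CPU-only torch wheel, which then breaks the CUDA extension build for GroundingDINO. In practice you would pass torch.__version__ to a check like this small sketch (the helper name is mine; it only inspects the "+cuXXX" local version tag):

```python
def is_cuda_build(version_string):
    """True if a torch version string such as '2.1.0+cu121' names a CUDA build.

    CPU-only wheels report either no local tag ('2.1.0') or '+cpu'.
    """
    _, _, local = version_string.partition("+")
    return local.startswith("cu")
```

If this returns False for your installed torch, reinstall with the cu121 wheels before building GroundingDINO.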
python grounding_dino_demo.py
Result: an annotated_image.jpg is written.
python grounded_sam_demo.py --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py --grounded_checkpoint groundingdino_swint_ogc.pth --sam_checkpoint sam_vit_h_4b8939.pth --input_image assets/demo1.jpg --output_dir "outputs" --box_threshold 0.3 --text_threshold 0.25 --text_prompt "bear" --device "cuda"
Result: an outputs folder is created containing grounded_sam_output.jpg (the image with the detection boxes drawn), mask.jpg (the mask image), raw_image.jpg (the original image), and mask.json:
grounded_sam_output.jpg (image with detection boxes):
mask.jpg (mask image):
raw_image.jpg (original image):
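For downstream use it is handy to read mask.json back in. In the repo version I used, each entry carries value, label, logit, and box fields, plus a leading background entry; the sketch below assumes that layout and may need adjusting for other versions:

```python
import json

def load_detections(path):
    """Read a Grounded-SAM mask.json and return (label, logit, box) per instance,
    skipping the leading 'background' record."""
    with open(path) as f:
        records = json.load(f)
    return [
        (r["label"], r.get("logit"), r.get("box"))
        for r in records
        if r.get("label") != "background"
    ]
```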
python grounded_sam_inpainting_demo.py \
--config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
--grounded_checkpoint groundingdino_swint_ogc.pth \
--sam_checkpoint sam_vit_h_4b8939.pth \
--input_image assets/inpaint_demo.jpg \
--output_dir "outputs" \
--box_threshold 0.3 \
--text_threshold 0.25 \
--det_prompt "bench" \
--inpaint_prompt "A sofa, high quality, detailed" \
--device "cuda"
Result: an outputs folder is created containing grounded_sam_output.jpg (the image with the detection boxes drawn), grounded_sam_inpainting_output.jpg (the inpainted image), and raw_image.jpg (the original image):
grounded_sam_output.jpg (image with detection boxes):
grounded_sam_inpainting_output.jpg (inpainted image):
raw_image.jpg (original image):
The following problem bothered me for several days; here is the solution I ended up with.
Since I was using a school server without root privileges, commands such as sudo were unavailable, so I had to obtain gcc 9.4.0 in the way below; with root you could simply install it through the system package manager.
How to get gcc 9.4.0:
conda install https://anaconda.org/brown-data-science/gcc/9.4.0/download/linux-64/gcc-9.4.0-0.tar.bz2
Note that this conda command installs a prebuilt gcc directly into the active environment; the steps below are the alternative route of building gcc 9.4.0 from a source tarball (e.g. gcc-9.4.0.tar.gz from a GNU mirror):
tar -xzf gcc-9.4.0.tar.gz
cd gcc-9.4.0
mkdir gcc-9.4.0-build
cd gcc-9.4.0-build
../configure --disable-checking --enable-languages=c,c++,fortran --disable-multilib --prefix=/path/to/install/gcc-9.4 --enable-threads=posix
make
make install
Run the following to open ~/.bashrc for editing:
vim ~/.bashrc
Press a to enter insert mode (the file can only be modified in insert mode), then paste the following lines at the end of the file, where the path is the install prefix chosen for gcc-9.4.0 earlier:
export PATH=/public/home/mcao/usr/xy/source/gcc/bin:$PATH
export LD_LIBRARY_PATH=/public/home/mcao/usr/xy/source/gcc/lib/:/public/home/mcao/usr/xy/source/gcc/lib64/:$LD_LIBRARY_PATH
When you are done, press ESC, then type :wq and press Enter to save and quit.
Finally, run the following so the new configuration takes effect in the current shell:
source ~/.bashrc
You can then check the gcc version:
gcc -v
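With several gcc builds on the machine (the system one plus the one prepended to PATH above), it can help to check programmatically which binary the shell actually resolves. A small stdlib sketch (gcc -dumpversion prints only the version number):

```python
import shutil
import subprocess

def resolved_gcc():
    """Return (path, version) for the gcc that PATH currently resolves, or None
    if no gcc is found."""
    path = shutil.which("gcc")
    if path is None:
        return None
    proc = subprocess.run([path, "-dumpversion"], capture_output=True, text=True)
    return path, proc.stdout.strip()
```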
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like bert-base-uncased is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
Solution 1 (some people reportedly fixed this by going through a proxy server, but it did not work for me):
import os
# These must be set before transformers/huggingface_hub makes any network request
os.environ['HTTP_PROXY'] = 'http://127.0.0.1:7890'
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:7890'
Solution 2: manually download the model (here bert-base-uncased) into a local directory, then edit the code to load it from there.
OSError: Cannot load model runwayml/stable-diffusion-inpainting: model is not cached locally and an error occured while trying to fetch metadata from the Hub. Please check out the root cause in the stacktrace above.
The fix is the same as above: download the model locally, then edit the code.
First download the stable-diffusion-inpainting model into the Grounded-Segment-Anything directory, then modify grounded_sam_inpainting_demo.py; concretely, replace the commented-out lines 204-206 shown in the screenshot (which load the model from the Hub) with the code of lines 207-209, which load it from the local path.
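The spirit of that edit can be captured in a small helper (my own, not part of the repo): if a local copy of the model exists, return its path, otherwise fall back to the Hub id. from_pretrained in diffusers accepts a local directory the same way it accepts a repo id.

```python
from pathlib import Path

def resolve_model(model_id, local_root="."):
    """Return a local directory for model_id if one exists under local_root
    (detected via diffusers' model_index.json or a config.json), else the Hub id."""
    local = Path(local_root) / model_id.split("/")[-1]
    if (local / "model_index.json").is_file() or (local / "config.json").is_file():
        return str(local)
    return model_id
```

Usage would then look like StableDiffusionInpaintPipeline.from_pretrained(resolve_model("runwayml/stable-diffusion-inpainting"), ...), which keeps the script working both online and offline.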
References:
Grounded-Segment-Anything local deployment
Semantic segmentation: Grounded Segment Anything environment setup and usage tutorial (with a fix for the name '_C' is not defined error)
Notes on pitfalls while setting up the Grounded-Segment-Anything environment