SAM Deployment
1. Check the required environment: `python>=3.8`, `pytorch>=1.7`, `torchvision>=0.8`
2. Install Anaconda:
1. Find the matching download URL on the official site, then run in a terminal:
wget https://repo.anaconda.com/archive/Anaconda3-5.3.1-Linux-x86_64.sh
2. Run the .sh installer:
bash Anaconda3-5.3.1-Linux-x86_64.sh
3. Install PyTorch:
1. Create a virtual environment with conda (python=3.8):
conda create -n envName_pytorch python=3.8
2. Activate the virtual environment:
source activate envName_pytorch
3. Install pytorch via conda:
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
4. Verify the installation:
Launch `python`, then run `import torch` and `torch.rand(3,3)`.
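If the installation is healthy, the check looks like this (a minimal sketch; the `torch.cuda.is_available()` line is an extra check added here, not part of the original notes):

```python
import torch

print(torch.rand(3, 3))           # should print a random 3x3 tensor
print(torch.cuda.is_available())  # True means the CUDA build is active
```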
But `import torch` raised an error:
Traceback (most recent call last):
File "`<stdin>`", line 1, in `<module>`
ModuleNotFoundError: No module named 'torch'
Solution: reinstall torch:
conda install pytorch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 pytorch-cuda=11.8 -c pytorch -c nvidia
But the download failed:
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
Solution: go to https://anaconda.org and use the search bar at the top to find another channel that provides the package you want to install.
5. Update the dependency channels
Switching mirrors failed, so revert to the default conda channels: conda config --remove-key channels
4. Download SAM:
1. Clone the repository locally from GitHub: git clone https://github.com/facebookresearch/segment-anything.git
The pull failed: fatal: unable to access 'https://github.com/facebookresearch/segment-anything.git/': GnuTLS recv error (-110): The TLS connection was non-properly terminated.
Solution: git config --global --unset https.proxy
2. Install SAM:
cd segment-anything
pip install -e .
3. Install the other dependencies:
pip install opencv-python pycocotools matplotlib onnxruntime onnx
4. Download the model checkpoints: https://github.com/facebookresearch/segment-anything#model-checkpoints
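If wget or a browser is inconvenient, the ViT-H checkpoint can also be fetched from Python. A minimal sketch using the checkpoint URL listed in the segment-anything README:

```python
import urllib.request

# ViT-H checkpoint URL from the segment-anything README; multi-GB download
url = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"
urllib.request.urlretrieve(url, "sam_vit_h_4b8939.pth")
```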
5. Run SAM to generate masks (scripts/amg.py runs inference, not training):
1. python scripts/amg.py --checkpoint /home/xtyg/workspace-gm/segment-anything/notebooks/sam_vit_h_4b8939.pth --model-type vit_h --input /home/xtyg/workspace-gm/segment-anything/test/input --output /home/xtyg/workspace-gm/segment-anything/test/output
But it errored:
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Solution: the CPU-only torch build had been installed, so uninstall it: conda uninstall libtorch
2. After fixing that, running again produced a new error:
Traceback (most recent call last):
File "scripts/amg.py", line 7, in `<module>`
import cv2 # type: ignore
ModuleNotFoundError: No module named 'cv2'
Solution: install cv2 with pip: pip install opencv-python
Verify by running `import cv2` in Python.
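A quick sanity check (printing the version is an extra step beyond the original note):

```python
import cv2

print(cv2.__version__)  # any version string means the import works
```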
3. After fixing that, running the model produced:
ModuleNotFoundError: No module named 'segment_anything'
Solution: reinstall SAM with pip.
4. Run the model segmentation command to generate masks:
python scripts/amg.py --checkpoint /home/xtyg/workspace-gm/segment-anything1/sam_vit_h_4b8939.pth --model-type vit_h --input /home/xtyg/workspace-gm/segment-anything1/test/input --output /home/xtyg/workspace-gm/segment-anything1/test/output
Segmentation succeeded!
5. Extract all objects from the image
1. Import the dependencies and define a display helper:
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor
import cv2
import matplotlib.pyplot as plt
import numpy as np
def show_anns(anns):
    # Overlay every predicted mask on the current axes, largest areas first
    if len(anns) == 0:
        return
    sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True)
    ax = plt.gca()
    ax.set_autoscale_on(False)
    for ann in sorted_anns:
        m = ann['segmentation']  # boolean HxW mask
        img = np.ones((m.shape[0], m.shape[1], 3))
        color_mask = np.random.random((1, 3)).tolist()[0]
        for i in range(3):
            img[:, :, i] = color_mask[i]
        # Use the mask as the alpha channel so only masked pixels are tinted
        ax.imshow(np.dstack((img, m * 0.35)))
2. Read the image:
image = cv2.imread(r"F:\Dataset\Tomato_Appearance\Tomato_Xishi\images\xs_1.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
3. Instantiate the model:
sam_checkpoint = "F:\sam_vit_h_4b8939.pth"
model_type = "default"
device = "cuda"
sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
sam.to(device=device)
4. Segment and display:
mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(image)
plt.figure(figsize=(20,20))
plt.imshow(image)
show_anns(masks)
plt.axis('off')
plt.show()
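Each element of `masks` is a dict whose 'segmentation' entry is a boolean HxW array. To keep the results on disk as well, a small sketch (the file naming is hypothetical, not part of the original workflow):

```python
# Save each mask as a binary PNG; bool -> uint8 in {0, 255}
for i, mask in enumerate(masks):
    binary = (mask['segmentation'] * 255).astype(np.uint8)
    cv2.imwrite(f"mask_{i}.png", binary)
```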
6. Exit the virtual environment: conda deactivate
7. ONNX export:
1. Install the ONNX module: pip install onnx
2.python scripts/export_onnx_model.py --checkpoint /home/xtyg/workspace-gm/segment-anything1/sam_vit_h_4b8939.pth --model-type vit_h --output /home/xtyg/workspace-gm/segment-anything1/onnxout.onnx
Export using the command above.
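Before wiring the file into the demo, it is worth sanity-checking the export. A minimal sketch using the onnx package installed above:

```python
import onnx

# Path taken from the export command above
model = onnx.load("/home/xtyg/workspace-gm/segment-anything1/onnxout.onnx")
onnx.checker.check_model(model)  # raises if the graph is malformed
print([inp.name for inp in model.graph.input])
```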
8. Web demo
1. Install yarn:
1. Import the GPG key and add the Yarn APT repository to your system:
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
2. Install yarn: sudo apt install yarn
Starting yarn errored: ERROR: There are no scenarios; must have at least one.
Solution: remove the cmdtest package (its own yarn binary causes this error), then reinstall:
sudo apt remove cmdtest
sudo apt remove yarn
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt update
sudo apt install yarn
3. Enter the demo directory: cd demo
Run: yarn && yarn start
报错:error npyjs@0.4.0: The engine "node" is incompatible with this module. Expected version "^12.20.0 || >=14.13.1". Got "10.19.0"
error Found incompatible module.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
Solution: yarn config set ignore-engines true
Then run yarn && yarn start again
Error: Read error (Connection reset by peer) in headers.
Solution: the yarn dependencies were not fully installed, so yarn never actually came up.
Uninstall it: sudo npm uninstall yarn -g
Reinstall it: sudo npm install -g yarn
Then run the following commands in order:
yarn -v
yarn config get registry
yarn config set registry https://registry.npm.taobao.org
yarn global add
yarn global list
yarn init
yarn list
After installing, yet another error: info node: --openssl-legacy-provider is not allowed in NODE_OPTIONS
Solution: reinstall nodejs
2. Running yarn start showed: ERROR in unable to locate '/home/xtyg/workspace-gm/segment-anything1/demo/model' glob
Solution: this error means the model had not been exported, so no onnx model could be found at that path.
1. First, export the onnx model:
import torch
import numpy as np
import cv2
import matplotlib.pyplot as plt
from segment_anything import sam_model_registry, SamPredictor
from segment_anything.utils.onnx import SamOnnxModel
import onnxruntime
from onnxruntime.quantization import QuantType
from onnxruntime.quantization.quantize import quantize_dynamic
# checkpoints/sam_vit_h_4b8939.pth exists locally in my setup
checkpoint = "sam_vit_h_4b8939.pth"
model_type = "vit_h"
sam = sam_model_registry[model_type](checkpoint=checkpoint)
onnx_model_path = None # Set to use an already exported model, then skip to the next section.
import warnings
onnx_model_path = "sam_onnx_example.onnx"
onnx_model = SamOnnxModel(sam, return_single_mask=True)
dynamic_axes = {
    "point_coords": {1: "num_points"},
    "point_labels": {1: "num_points"},
}
embed_dim = sam.prompt_encoder.embed_dim
embed_size = sam.prompt_encoder.image_embedding_size
mask_input_size = [4 * x for x in embed_size]
dummy_inputs = {
    "image_embeddings": torch.randn(1, embed_dim, *embed_size, dtype=torch.float),
    "point_coords": torch.randint(low=0, high=1024, size=(1, 5, 2), dtype=torch.float),
    "point_labels": torch.randint(low=0, high=4, size=(1, 5), dtype=torch.float),
    "mask_input": torch.randn(1, 1, *mask_input_size, dtype=torch.float),
    "has_mask_input": torch.tensor([1], dtype=torch.float),
    "orig_im_size": torch.tensor([1500, 2250], dtype=torch.float),
}
output_names = ["masks", "iou_predictions", "low_res_masks"]
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=torch.jit.TracerWarning)
    warnings.filterwarnings("ignore", category=UserWarning)
    # This writes out sam_onnx_example.onnx
    with open(onnx_model_path, "wb") as f:
        torch.onnx.export(
            onnx_model,
            tuple(dummy_inputs.values()),
            f,
            export_params=True,
            verbose=False,
            opset_version=17,
            do_constant_folding=True,
            input_names=list(dummy_inputs.keys()),
            output_names=output_names,
            dynamic_axes=dynamic_axes,
        )
onnx_model_quantized_path = "sam_onnx_quantized_example.onnx"
quantize_dynamic(
    model_input=onnx_model_path,
    model_output=onnx_model_quantized_path,
    per_channel=False,
    reduce_range=False,
    weight_type=QuantType.QUInt8,
)
onnx_model_path = onnx_model_quantized_path
2. The code above generates the sam_onnx_example.onnx and sam_onnx_quantized_example.onnx models.
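To confirm the quantized model loads and exposes the expected inputs, a quick check with onnxruntime (already imported above):

```python
# The input names should match the keys of dummy_inputs above
ort_session = onnxruntime.InferenceSession("sam_onnx_quantized_example.onnx")
print([inp.name for inp in ort_session.get_inputs()])
```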
3. Run the code below to encode the jpg image and produce a .npy embedding file:
checkpoint = "../checkpoints/sam_vit_h_4b8939.pth"
model_type = "vit_h"
sam = sam_model_registry[model_type](checkpoint=checkpoint)
sam.to(device='cuda')
predictor = SamPredictor(sam)
image = cv2.imread('../demo/src/assets/data/dogs.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # SamPredictor expects RGB input
predictor.set_image(image)
image_embedding = predictor.get_image_embedding().cpu().numpy()
np.save("dogs_embedding.npy", image_embedding)
print(type(image_embedding), image_embedding.shape)
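For the ViT-H model the printed shape should be (1, 256, 64, 64); this is the tensor the demo loads back from the .npy file. A hedged assertion, assuming the vit_h encoder:

```python
# For vit_h, embed_dim = 256 and image_embedding_size = (64, 64)
assert image_embedding.shape == (1, 256, 64, 64)
```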
4. Place the files at the paths defined in demo/src/App.tsx:
const IMAGE_PATH = "/assets/data/dogs.jpg";
const IMAGE_EMBEDDING = "/assets/data/dogs_embedding.npy";
const MODEL_DIR = "/model/sam_onnx_quantized_example.onnx";
3. Next, enter the demo directory: cd demo
Run yarn && yarn start
Entering http://192.168.1.183:8080/ in the browser successfully connects to the server.