1 SD is relatively forgiving: copy the downloaded models into the relevant directories and it is basically ready to use. Note that SD also needs additional models downloaded.
2 kohya_ss needs to reach the external network both at startup and when training begins, so an outbound internet connection is required.
3 When going through a proxy, exclude local addresses with export no_proxy="localhost,127.0.0.1,::1" (see the example below).
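For example, a typical proxy environment might look like the following; the proxy host and port are placeholders for your own values:
export http_proxy="http://proxy.example.com:8080"    # placeholder proxy address
export https_proxy="http://proxy.example.com:8080"   # placeholder proxy address
export no_proxy="localhost,127.0.0.1,::1"            # keep local traffic (e.g. the web UI) off the proxy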
Using SD to caption (tag) the images:
1 Source images (the size can be adapted automatically) are best provided as PNG; JPG sources are automatically written out as PNG (note the resulting files can be very large).
2 It is best to use ffmpeg to convert everything to a uniform resolution first (see the sketch after this list).
3 kohya_ss defaults to 512*512; change this to your actual image size, otherwise only the default-sized portion of (some) images is used.
4 Multiple containers can share the same GPU (an A100 here).
5 Restarting the container can resolve out-of-resource errors.
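A minimal ffmpeg sketch for item 2, assuming the source images live in ./raw, the output goes to ./resized, and 768x768 is the target training resolution (all three are assumptions, adjust to your dataset):
mkdir -p ./resized
for f in ./raw/*.jpg; do
  # scale down preserving aspect ratio, then pad to a uniform 768x768 PNG
  ffmpeg -i "$f" -vf "scale=768:768:force_original_aspect_ratio=decrease,pad=768:768:(ow-iw)/2:(oh-ih)/2" "./resized/$(basename "${f%.*}").png"
done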
########################################
https://github.com/Stability-AI/generative-models
Follow the official documentation for the installation steps.
Start it with the command below (the environment must be initialized first):
PYTHONPATH=. streamlit run scripts/demo/video_sampling.py --server.port 7860
#######################################
It should be used this way (Linux):
To set up and use the Stable Video Diffusion XT model (stable-video-diffusion-img2vid-xt) from Stability AI, you can follow these steps:
Prerequisites:
The setup is confirmed to work on Ubuntu 22.04.3 LTS with Python version 3.10.12.
An NVIDIA GPU is required.
Ensure sufficient storage space as model files are around 10GB each.
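Before cloning anything, you can sanity-check these prerequisites with a few standard commands (not part of the official docs, just a quick verification):
lsb_release -d      # expect Ubuntu 22.04.3 LTS
python3 --version   # expect Python 3.10.12
nvidia-smi          # confirms the NVIDIA GPU and driver are visible
df -h .             # model files are ~10GB each, so check free space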
Clone the Generative Models Repository:
Clone Stability AI's generative-models repository and navigate to it:
git clone https://github.com/Stability-AI/generative-models.git
cd generative-models
Set Up Python Environment:
Install python3.10-venv and PyTorch 2.0:
sudo apt install python3.10-venv
python3 -m venv .pt2
source .pt2/bin/activate
pip3 install -r requirements/pt2.txt
pip3 install .
Modify the Streamlit Helpers File (optional):
Edit the file scripts/demo/streamlit_helpers.py and set lowvram_mode to True if you have limited VRAM.
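Assuming the file defines a module-level lowvram_mode = False flag (an assumption about the current repository layout), the change can also be made non-interactively:
sed -i 's/^lowvram_mode = False/lowvram_mode = True/' scripts/demo/streamlit_helpers.py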
Download Model Weights:
Create a checkpoints directory and download the required model files from Hugging Face into this directory:
mkdir checkpoints
wget -O ./checkpoints/svd_xt.safetensors 'https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/resolve/main/svd_xt.safetensors?download=true'
wget -O ./checkpoints/svd_xt_image_decoder.safetensors 'https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/resolve/main/svd_xt_image_decoder.safetensors?download=true'
Running the Demo Script:
Run the demo script using Streamlit:
PYTHONPATH=. ./.pt2/bin/streamlit run scripts/demo/video_sampling.py
Select the desired model version, upload your image, and adjust settings as needed. Finally, click 'Sample' to start generating the video.
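On a headless server, or if you want the fixed port used earlier, the usual Streamlit server flags apply; binding to 0.0.0.0 exposes the UI on all interfaces, so only do this on a trusted network:
PYTHONPATH=. ./.pt2/bin/streamlit run scripts/demo/video_sampling.py --server.port 7860 --server.address 0.0.0.0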
Output:
The generated video will be saved in the ./outputs/demo/vid/svd/samples/ directory. You can set up a temporary HTTP server to access the files easily using Docker:
sudo docker run -d -p 80:80 -v "$PWD"/outputs/demo/vid/svd/samples:/usr/local/apache2/htdocs/ httpd
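If you would rather not pull a Docker image, Python's built-in HTTP server (Python 3.7+) does the same job for a quick look at the results:
python3 -m http.server 8000 --directory ./outputs/demo/vid/svd/samples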