
AIGC pitfalls


1. Both stable-diffusion-webui and kohya_ss can be deployed in containers.

2. Both have version requirements on CUDA and cuDNN 8; the official NVIDIA images are recommended.
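A quick way to check what the host driver supports (the CUDA version reported by nvidia-smi must be at least as new as the CUDA version of the image you plan to run):

nvidia-smi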

1. Install Docker with NVIDIA GPU support (nvidia-docker2)
apt-get update
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
apt-get update
apt-get install -y nvidia-docker2
systemctl restart docker
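To verify that GPU passthrough works, a minimal check is to run nvidia-smi inside a CUDA container (any CUDA image will do; the small base variant from Docker Hub is used here as an example):

docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi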
2. Pull the base images (different applications require different CUDA versions)
docker pull nvcr.io/nvidia/cuda:11.8.0-devel-ubuntu22.04
docker pull nvcr.io/nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04    
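A minimal sketch of starting a work container from one of these images; the host path /data/aigc and the mount point /workspace are placeholders, and 7860 is the default port of stable-diffusion-webui:

docker run -it --gpus all -p 7860:7860 -v /data/aigc:/workspace nvcr.io/nvidia/cuda:11.8.0-devel-ubuntu22.04 bash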

3. Internet access is required

  1. stable-diffusion-webui is relatively easy to work around: copy the models it needs into the corresponding directories and it is basically usable, although it still downloads some additional models on its own.

  2. kohya_ss needs internet access both at startup and when training begins, so outbound connectivity is mandatory.

  3. When going through a proxy, also set export no_proxy="localhost,127.0.0.1,::1" so that local traffic bypasses the proxy (full example below).
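A minimal sketch of the full proxy environment; the proxy address 127.0.0.1:7890 is a placeholder for your own proxy:

export http_proxy="http://127.0.0.1:7890"
export https_proxy="http://127.0.0.1:7890"
export no_proxy="localhost,127.0.0.1,::1"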

4. Training a LoRA

  1. Use stable-diffusion-webui to caption (tag) the images.

   1. Source images (the size can be adapted automatically) should preferably be PNG; JPG sources are automatically written out as PNG (note the resulting files can be very large).

   2. It is best to use ffmpeg to convert everything to a uniform resolution first (see the sketch after this list).

   3. kohya_ss defaults to 512*512; change this to your actual image resolution, otherwise only a default-sized portion of each image is used.

   4. Multiple containers can share the same GPU (tested on an A100).

   5. Restarting the container can clear up insufficient-resource errors.
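A minimal ffmpeg sketch for item 2 above, assuming the source images sit in raw/ and the training images should be 1024x1024 PNGs in train/ (directories and resolution are placeholders; use the resolution you configure in kohya_ss):

mkdir -p train
for f in raw/*; do
  # upscale to cover 1024x1024, then center-crop to exactly 1024x1024; the .png extension selects PNG output
  ffmpeg -i "$f" -vf "scale=1024:1024:force_original_aspect_ratio=increase,crop=1024:1024" "train/$(basename "${f%.*}").png"
done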

         

########################################

https://github.com/Stability-AI/generative-models


 

Follow the official documentation for the installation steps.

Launch it with the following command (the Python environment must be set up and activated first):

PYTHONPATH=. streamlit run scripts/demo/video_sampling.py --server.port 7860

#######################################

On Linux, you can set up and use the Stable Video Diffusion XT model (stable-video-diffusion-img2vid-xt) from Stability AI by following these steps:

Prerequisites:

The setup is confirmed to work on Ubuntu 22.04.3 LTS with Python version 3.10.12.
An NVIDIA GPU is required.
Ensure sufficient storage space as model files are around 10GB each.
Clone the Generative Models Repository:

Clone Stability AI's generative-models repository and navigate to it:

git clone https://github.com/Stability-AI/generative-models.git
cd generative-models
Set Up Python Environment:

Install python3.10-venv and PyTorch 2.0:

sudo apt install python3.10-venv
python3 -m venv .pt2
source .pt2/bin/activate
pip3 install -r requirements/pt2.txt
pip3 install .
Modify the Streamlit Helpers File (optional):

Edit the file scripts/demo/streamlit_helpers.py and set lowvram_mode to True if you have limited VRAM.
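For example (assuming the flag is still spelled lowvram_mode = False in the current version of that file):

sed -i 's/lowvram_mode = False/lowvram_mode = True/' scripts/demo/streamlit_helpers.py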
Download Model Weights:

Create a checkpoints directory and download the required model files from Hugging Face into this directory:

mkdir checkpoints
wget -O ./checkpoints/svd_xt.safetensors 'https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/resolve/main/svd_xt.safetensors?download=true'
wget -O ./checkpoints/svd_xt_image_decoder.safetensors 'https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/resolve/main/svd_xt_image_decoder.safetensors?download=true'
Running the Demo Script:

Run the demo script using Streamlit:

PYTHONPATH=. ./.pt2/bin/streamlit run scripts/demo/video_sampling.py
Select the desired model version, upload your image, and adjust settings as needed. Finally, click 'Sample' to start generating the video.
Output:

The generated video will be saved in the ./outputs/demo/vid/svd/samples/ directory. You can set up a temporary HTTP server to access the files easily, for example with the official httpd Docker image:

sudo docker run -p 80:80 -v "$(pwd)/outputs/demo/vid/svd/samples":/usr/local/apache2/htdocs/ httpd

Note: input images should be 1024x576, otherwise they are cropped automatically; in other words, landscape images work best.
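A minimal ffmpeg sketch to pre-crop an image to 1024x576 before uploading; input.jpg and input_1024x576.png are placeholder names:

ffmpeg -i input.jpg -vf "scale=1024:576:force_original_aspect_ratio=increase,crop=1024:576" input_1024x576.png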
