If you are new to the library and diffusion models, it can be hard to know which pipeline to use for a task. For example, if you use the **runwayml/stable-diffusion-v1-5** checkpoint for text-to-image, you can also use it for image-to-image and inpainting by loading the checkpoint with the **StableDiffusionImg2ImgPipeline** and **StableDiffusionInpaintPipeline** classes respectively.
Under the hood, AutoPipelineForText2Image:

1. detects the "stable-diffusion" class from the **[model_index.json](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json)** file
2. loads the corresponding text-to-image pipeline class based on that "stable-diffusion" class name
Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion" checkpoint from the model_index.json file and loads the corresponding **StableDiffusionImg2ImgPipeline** behind the scenes. You can also pass any additional arguments specific to the pipeline class, such as strength, which determines the amount of noise or variation added to the input image:
```python
from diffusers import AutoPipelineForImage2Image
import torch
import requests
from PIL import Image
from io import BytesIO

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

prompt = "a portrait of a dog wearing a pearl earring"
url = "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/1665_Girl_with_a_Pearl_Earring.jpg/800px-1665_Girl_with_a_Pearl_Earring.jpg"

response = requests.get(url)
image = Image.open(BytesIO(response.content)).convert("RGB")
image.thumbnail((768, 768))

image = pipeline(prompt, image, num_inference_steps=200, strength=0.75, guidance_scale=10.5).images[0]
image
```
If you want to do inpainting, **AutoPipelineForInpainting** loads the underlying **StableDiffusionInpaintPipeline** class in the same way:
```python
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url).convert("RGB")
mask_image = load_image(mask_url).convert("RGB")

prompt = "A majestic tiger sitting on a bench"
image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0]
image
```
For some workflows, or if you're loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them, which unnecessarily consumes additional memory. For example, if you're using a checkpoint for text-to-image and want to use it again for image-to-image, use the **[from_pipe()](https://huggingface.co/docs/diffusers/v0.26.3/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image.from_pipe)** method. This method creates a new pipeline from the components of a previously loaded pipeline at no additional memory cost.
The **from_pipe()** method detects the original pipeline class and maps it to the new pipeline class corresponding to the task you want to do. For example, if you load a "stable-diffusion" class pipeline for text-to-image:
```python
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
import torch

pipeline_text2img = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)
print(type(pipeline_text2img))
"<class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'>"
```
Then **from_pipe()** maps the original "stable-diffusion" pipeline class to **StableDiffusionImg2ImgPipeline**:
```python
pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img)
print(type(pipeline_img2img))
"<class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline'>"
```
If you passed an optional argument to the original pipeline, such as disabling the safety checker, this argument is also passed on to the new pipeline:
```python
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
import torch

pipeline_text2img = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
    requires_safety_checker=False,
).to("cuda")

pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img)
print(pipeline_img2img.config.requires_safety_checker)
"False"
```
You can overwrite any of the arguments and even the configuration from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument: