https://github.com/Stability-AI/StableDiffusion
After installing Stable Diffusion and trying to run it, an error occurs. The command used is:
python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt D:\spyderData\pth\v2-1_768-ema-pruned.ckpt --config configs/stable-diffusion/v2-inference-v.yaml
The error is as follows:
Traceback (most recent call last):
  File "scripts/txt2img.py", line 388, in <module>
    main(opt)
  File "scripts/txt2img.py", line 342, in main
    uc = model.get_learned_conditioning(batch_size * [""])
  File "d:\runcodes\stablediffusion-v2\ldm\models\diffusion\ddpm.py", line 665, in get_learned_conditioning
    c = self.cond_stage_model.encode(c)
  File "d:\runcodes\stablediffusion-v2\ldm\modules\encoders\modules.py", line 237, in encode
    return self(text)
  File "D:\ProgramData\envs\stablediffusionv2\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\runcodes\stablediffusion-v2\ldm\modules\encoders\modules.py", line 214, in forward
    z = self.encode_with_transformer(tokens.to(self.device))
  File "d:\runcodes\stablediffusion-v2\ldm\modules\encoders\modules.py", line 221, in encode_with_transformer
    x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)
  File "d:\runcodes\stablediffusion-v2\ldm\modules\encoders\modules.py", line 233, in text_transformer_forward
    x = r(x, attn_mask=attn_mask)
  File "D:\ProgramData\envs\stablediffusionv2\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ProgramData\envs\stablediffusionv2\lib\site-packages\open_clip\transformer.py", line 242, in forward
    x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask))
  File "D:\ProgramData\envs\stablediffusionv2\lib\site-packages\open_clip\transformer.py", line 228, in attention
    return self.attn(
  File "D:\ProgramData\envs\stablediffusionv2\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ProgramData\envs\stablediffusionv2\lib\site-packages\torch\nn\modules\activation.py", line 1205, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "D:\ProgramData\envs\stablediffusionv2\lib\site-packages\torch\nn\functional.py", line 5373, in multi_head_attention_forward
    attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
RuntimeError: Expected attn_mask dtype to be bool or to match query dtype, but got attn_mask.dtype: float and query.dtype: struct c10::BFloat16 instead.
The fix is as follows:
Add --device cuda to the command:
python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt ldm/models/768-v-ema.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768 --device cuda
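Why this helps (my reading of the traceback, not an official explanation): in PyTorch 2.x, F.scaled_dot_product_attention requires the attention mask to be bool or to match the query dtype. Without --device cuda the script ends up on the CPU autocast path, where the CLIP text encoder's queries become bfloat16 while open_clip's causal attn_mask stays float32, which triggers the RuntimeError. Forcing CUDA avoids that combination. A minimal standalone sketch of the mismatch and of the cast that would also work around it (the tensors and shapes below are made up for illustration and are not taken from the Stable Diffusion code):

import torch
import torch.nn.functional as F

# bfloat16 queries, as produced under CPU autocast; float32 mask, as built by open_clip
q = k = v = torch.randn(1, 4, 8, dtype=torch.bfloat16)
attn_mask = torch.zeros(4, 4, dtype=torch.float32)

# F.scaled_dot_product_attention(q, k, v, attn_mask)  # raises the RuntimeError shown above
out = F.scaled_dot_product_attention(q, k, v, attn_mask.to(q.dtype))  # casting the mask avoids it
print(out.dtype)  # torch.bfloat16

If you must stay on CPU, the same cast could in principle be applied where the mask is passed into the text transformer (e.g. attn_mask.to(x.dtype)), but the --device cuda flag suggested in issue #203 is the simpler fix.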
References:
https://github.com/Stability-AI/stablediffusion/issues/203
https://github.com/Stability-AI/StableDiffusion
First, download the weights for SD2.1-v and SD2.1-base.
To sample from the SD2.1-v model, run the following:
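For reference, the README's sampling command is of this form (essentially the working command above without --device cuda, with a placeholder checkpoint path):

python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt <path/to/768model.ckpt/> --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768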
Some of these steps are hinted at on that page.