The problem is as follows:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.12 GiB (GPU 0; 23.69 GiB total capacity; 20.35 GiB already allocated; 329.19 MiB free; 23.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Solution: set the following environment variable in the code:
import os

# Must be set before the first CUDA allocation so the caching allocator picks it up.
# roundup_power2_divisions: round requested sizes up to power-of-2 buckets, per size range
# max_split_size_mb: do not split cached blocks larger than 128 MB, which reduces fragmentation
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'roundup_power2_divisions:[256:1,512:2,1024:4,>:8],max_split_size_mb:128'
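The same variable can also be exported in the shell before launching the script; either way it generally needs to be in place before PyTorch initializes its CUDA caching allocator (i.e. before the first CUDA tensor is created), or the options may not take effect.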
The root cause, put plainly, is that the GPU has run out of memory. Possible approaches include:

- Switch tensors and the model from float32 to float16 (half precision), which roughly halves memory use (see the sketch after this list).
- Call torch.cuda.empty_cache() to release cached blocks that are no longer referenced.
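A minimal sketch of the first two approaches, assuming a CUDA-capable GPU; the model and tensor shapes here are placeholders, not the original author's code:

import torch
import torch.nn as nn

device = torch.device('cuda')

# Placeholder model; substitute your own.
model = nn.Linear(4096, 4096).to(device)

# Approach 1: run in half precision to roughly halve memory for weights and activations.
model = model.half()
x = torch.randn(64, 4096, device=device, dtype=torch.float16)
y = model(x)

# Approach 2: drop references, then release cached blocks back to the driver
# so later allocations (or other processes) can reuse the memory.
del y, x
torch.cuda.empty_cache()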
Here I went with the fifth approach (tuning the allocator via PYTORCH_CUDA_ALLOC_CONF): follow the error message to the PyTorch memory-management documentation and try every option listed there. The option format is PYTORCH_CUDA_ALLOC_CONF=<option>:<value>,<option2>:<value2>...
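A hedged sketch of that trial-and-error loop, assuming a CUDA GPU: set one candidate option, run a representative workload (the one below is a placeholder), and compare allocated vs. reserved memory to see whether fragmentation (reserved >> allocated, as in the error above) improves.

import os
# Candidate setting under test; export it before the first CUDA allocation.
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'

import torch

device = torch.device('cuda')

# Placeholder workload standing in for the real training step.
tensors = [torch.randn(1024, 1024, device=device) for _ in range(64)]
del tensors[::2]  # free every other tensor to create fragmentation-like churn

print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.1f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.1f} MiB")
print(torch.cuda.memory_summary(abbreviated=True))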