
CUDA out of memory: "If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation"

PyTorch training reports that GPU memory is exhausted, even though a large amount of GPU memory is actually still free:
CUDA out of memory. Tried to allocate 768.00 MiB (GPU 4; 44.37 GiB total capacity; 33.63 GiB already allocated; 560.56 MiB free; 42.34 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

This is caused by memory fragmentation; releasing the fragmented cached memory before training resolves it.
I run the following code before each batch's forward pass:

import os
import torch

# The OOM keeps recurring because fragmented blocks are never fully released.
# Note: PYTORCH_CUDA_ALLOC_CONF only takes effect if it is set before the CUDA
# caching allocator is initialized (i.e. before the first CUDA allocation).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
if hasattr(torch.cuda, 'empty_cache'):
    torch.cuda.empty_cache()
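To see why the error above points at fragmentation rather than true exhaustion, compare the figures it reports. The sketch below just redoes that arithmetic with the numbers from the error message; it needs no GPU:

```python
# Figures taken from the error message above (GPU 4, 44.37 GiB card).
GIB = 1024 ** 3
MIB = 1024 ** 2

reserved = 42.34 * GIB    # held by PyTorch's caching allocator
allocated = 33.63 * GIB   # actually in use by live tensors
request = 768 * MIB       # the allocation that failed

# Memory PyTorch has reserved but cannot hand out as one contiguous block:
fragmented = reserved - allocated
print(f"reserved but unallocated: {fragmented / GIB:.2f} GiB")

# The failed 768 MiB request is far smaller than the ~8.7 GiB of
# reserved-but-unallocated memory: the signature of fragmentation.
print(request < fragmented)
```

Because the reserved pool dwarfs what is actually allocated, capping the largest split block with `max_split_size_mb` (and emptying the cache) lets the allocator satisfy such mid-sized requests again.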
