The error message is as follows:
RuntimeError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 5.80 GiB total capacity; 4.39 GiB already allocated; 35.94 MiB free; 4.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Asking ChatGPT produced the answer below. Note, however, that two of the suggested calls, `torch.cuda.set_per_process_memory_growth` and `torch.cuda.set_max_split_size`, do not exist in PyTorch (the former resembles TensorFlow's `tf.config.experimental.set_memory_growth`); the only real API among them is `torch.cuda.set_per_process_memory_fraction`. The corrected snippet keeps just the valid call:

import torch
torch.cuda.set_per_process_memory_fraction(0.9, 0)  # cap this process at 90% of GPU 0's memory; adjust 0.9 as needed
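The `max_split_size_mb` hint mentioned in the error message is not set through a function call at all; it is passed through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which must be set before `torch` is imported. A minimal sketch (the value 128 here is an illustrative assumption; tune it for your workload):

```python
import os

# Must be set BEFORE `import torch` for the CUDA caching allocator to see it.
# Limits the size of blocks the allocator will split, reducing fragmentation,
# which is exactly what the OOM error message recommends.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import torch only after the variable is set
# torch.cuda.set_per_process_memory_fraction(0.9, 0)  # optional hard cap on GPU 0

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Alternatively, the variable can be set in the shell before launching the script, e.g. `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python train.py`.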