[root@bsyocr server-train]# tail trainall210722_6.log.txt
  File "/home/server-train/pytorch_pretrained/modeling.py", line 300, in forward
    mixed_query_layer = self.query(hidden_states)
  File "/usr/local/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib64/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "/usr/local/lib64/python3.6/site-packages/torch/nn/functional.py", line 1371, in linear
    output = input.matmul(weight.t())
RuntimeError: cublas runtime error : resource allocation failed at /pytorch/aten/src/THC/THCGeneral.cpp:216
While training the model, a `RuntimeError: CUDA out of memory.` error appeared (the cublas "resource allocation failed" above is the same root cause). After reading through a lot of related material, the cause is that GPU memory is insufficient. A quick summary of the fix: reduce `batch_size`.

For this run, BERT's pad.size was reduced from 2048 to 1024.
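Why halving pad.size helps so much: each self-attention layer materializes a score matrix of shape (batch, heads, seq_len, seq_len), so memory grows quadratically with the padded sequence length. A minimal back-of-the-envelope sketch (the `num_heads=12` and `batch_size=8` values are illustrative assumptions, not from this log):

```python
# Rough estimate of the attention score-matrix memory per BERT layer.
# Hypothetical numbers for illustration -- only the scaling matters here.
def attn_mem_bytes(batch_size, seq_len, num_heads=12, bytes_per_float=4):
    # Each self-attention layer holds a (batch, heads, seq_len, seq_len)
    # float tensor of attention scores.
    return batch_size * num_heads * seq_len * seq_len * bytes_per_float

before = attn_mem_bytes(batch_size=8, seq_len=2048)  # pad size 2048
after = attn_mem_bytes(batch_size=8, seq_len=1024)   # pad size 1024
print(before // after)  # -> 4: halving seq_len cuts attention memory 4x
```

So going from 2048 to 1024 shrinks the attention matrices by 4x, on top of the linear savings in every other activation; reducing `batch_size` scales everything linearly.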