
Debugging Code: Error Log 2


① Error: ImportError: cannot import name '_new_empty_tensor' from 'torchvision.ops' (D:\python\lib\site-packages\torchvision\ops\__init__.py)

Locate where the error occurs:

    import torchvision
    if float(torchvision.__version__[:3]) < 0.7:
        from torchvision.ops import _new_empty_tensor
        from torchvision.ops.misc import _output_size

Check the installed torchvision version with print(torchvision.__version__):

    import torchvision
    print(torchvision.__version__)

The output is:

0.10.0+cu111

float(torchvision.__version__[:3]) only inspects the first three characters of the version string, so for 0.10.0+cu111 it evaluates to 0.1; the check 0.1 < 0.7 then passes and the script tries to import _new_empty_tensor, which no longer exists in this torchvision version, hence the error.

Some resources suggest simply commenting out the relevant code, but that requires going through the code carefully and is fairly tedious. A simpler fix is to change the version check to:

    import torchvision
    if float(torchvision.__version__[2:4]) < 7:
        from torchvision.ops import _new_empty_tensor
        from torchvision.ops.misc import _output_size

This resolves ImportError: cannot import name '_new_empty_tensor' from 'torchvision.ops' (D:\python\lib\site-packages\torchvision\ops\__init__.py).
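
A note, not from the original post: slicing the version string stays fragile (for example, the [2:4] check would misfire again for a hypothetical 1.0 release). A more robust variant of the same idea is to parse the major and minor numbers explicitly:

    import torchvision

    # Split "0.10.0+cu111" into numeric (major, minor) before comparing.
    major, minor = (int(x) for x in torchvision.__version__.split('+')[0].split('.')[:2])
    if (major, minor) < (0, 7):
        from torchvision.ops import _new_empty_tensor
        from torchvision.ops.misc import _output_size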

② Error: RuntimeError: CUDA out of memory. Tried to allocate XXX MiB

Just reduce the batch size, e.g. in the training script's argument parser:

    parser.add_argument('--batch_size', default=1, type=int, help='default=16')
    parser.add_argument('--weight_decay', default=1e-4, type=float)
    parser.add_argument('--epochs', default=90, type=int)

③ Error: OSError: [WinError 1455] The paging file is too small for this operation to complete.

Set num_workers to 0. num_workers is the number of DataLoader worker processes; it is normally set to around the number of CPU cores on your machine or server, but on Windows each extra worker process commits additional virtual memory, which can exhaust the paging file and trigger this error.

Reducing batch_size can also help; see the sketch below.
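
A minimal sketch (the dataset and numbers here are placeholders, not the original code) showing where both settings live:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    dataset = TensorDataset(torch.randn(100, 10), torch.zeros(100))  # placeholder data
    loader = DataLoader(
        dataset,
        batch_size=1,   # smaller batches reduce GPU memory pressure
        num_workers=0,  # 0 avoids spawning worker processes on Windows
    )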

④ Error: RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor

    File "D:\python\lib\site-packages\torch\nn\utils\rnn.py", line 249, in pack_padded_sequence
      _VF._pack_padded_sequence(input, lengths, batch_first)
    RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor

Locate the offending lines (in torch\nn\utils\rnn.py):

    data, batch_sizes = \
        _VF._pack_padded_sequence(input, lengths, batch_first)
    return _packed_sequence_init(data, batch_sizes, sorted_indices, None)

After the change:

    data, batch_sizes = \
        _VF._pack_padded_sequence(input, lengths.cpu(), batch_first)
    return _packed_sequence_init(data, batch_sizes, sorted_indices, None)

Cause: PyTorch versions above 1.5 changed the (Bi-)LSTM packing implementation, and pack_padded_sequence now requires the lengths tensor to live on the CPU.
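
Editing the installed rnn.py works, but the change is lost whenever PyTorch is reinstalled. An alternative, sketched here under the assumption that your own code calls pack_padded_sequence directly, is to move the lengths tensor to the CPU at the call site instead:

    import torch
    import torch.nn as nn
    from torch.nn.utils.rnn import pack_padded_sequence

    lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True, bidirectional=True)
    x = torch.randn(4, 10, 8)              # (batch, seq_len, features)
    lengths = torch.tensor([10, 9, 7, 5])  # may live on the GPU in real code

    # .cpu() satisfies the PyTorch >= 1.5 requirement that lengths be a CPU tensor
    packed = pack_padded_sequence(x, lengths.cpu(), batch_first=True, enforce_sorted=True)
    out, (h, c) = lstm(packed)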

⑤ Error: RuntimeError: input.size(-1) must be equal to input_size. Expected 300, got 96

Fix this in the config file: the configured input_size (300 here) must match the last dimension of the data actually fed to the layer (96 here); see the sketch below.
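
A minimal reproduction, with made-up dimensions, just to show where the two numbers in the message come from:

    import torch
    import torch.nn as nn

    rnn = nn.LSTM(input_size=300, hidden_size=128, batch_first=True)  # config says 300
    x = torch.randn(2, 5, 96)                                         # data actually has 96 features
    try:
        rnn(x)
    except RuntimeError as e:
        print(e)  # input.size(-1) must be equal to input_size. Expected 300, got 96

    # Fix: set input_size in the config to 96, or project/embed the input up to 300.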

⑥ Error: RuntimeError: Found dtype Double but expected Float
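
This error usually means a float64 (Double) tensor, often created from a NumPy array, reached a model or loss function that expects float32. A common remedy, sketched here as a general pattern rather than the code from this project, is an explicit cast:

    import numpy as np
    import torch

    target = torch.from_numpy(np.array([0.1, 0.2, 0.3]))  # NumPy defaults to float64 (Double)
    pred = torch.tensor([0.0, 0.0, 0.0])                   # float32

    loss_fn = torch.nn.MSELoss()
    loss = loss_fn(pred, target.float())  # cast to float32 to avoid the dtype mismatch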

⑦ Error: RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 2.00 GiB total capacity; 1.02 GiB already allocated; 6.88 MiB free; 1.03 GiB reserved in total by PyTorch)

Locate where the error occurs:

    for layer in self.layers:
        output = layer(output, src_mask=mask,
                       src_key_padding_mask=src_key_padding_mask, pos=pos)

Wrap the block in with torch.no_grad(): so that no parameter gradients are computed for it, which also avoids storing the activations needed for backprop and thus saves GPU memory. Note that the wrapped layers will then receive no gradient updates, so this is only appropriate when they do not need to be trained.

After the change:

    with torch.no_grad():
        for layer in self.layers:
            output = layer(output, src_mask=mask,
                           src_key_padding_mask=src_key_padding_mask, pos=pos)

