
Fixing "Expected all tensors to be on the same device, but found at least two devices, cuda:0"

1. Problem Description

When running PyTorch code, the following error is raised:

Expected all tensors to be on the same device, but found at least two devices, cuda:0

2. Solution

First, check whether the parameters and data live on the CPU or on a particular GPU:

import torch
import torch.nn as nn

# ----------- Check whether the model is on the CPU or a GPU ----------------------

model = nn.LSTM(input_size=10, hidden_size=4, num_layers=1, batch_first=True)
print(next(model.parameters()).device)  # output: cpu

model = model.cuda()
print(next(model.parameters()).device)  # output: cuda:0

model = model.cpu()
print(next(model.parameters()).device)  # output: cpu

# ----------- Check whether a tensor is on the CPU or a GPU ----------------------

data = torch.ones([2, 3])
print(data.device)  # output: cpu

data = data.cuda()
print(data.device)  # output: cuda:0

data = data.cpu()
print(data.device)  # output: cpu
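When the error appears deep inside a model, it can help to verify consistency before the forward pass by collecting the devices of all parameters and inputs. This is a minimal sketch; `check_same_device` is a hypothetical helper written here for illustration, not a PyTorch API:

```python
import torch
import torch.nn as nn

def check_same_device(model, *tensors):
    """Return the common device if model parameters and tensors agree, else raise."""
    devices = {p.device for p in model.parameters()}
    devices |= {t.device for t in tensors}
    if len(devices) > 1:
        raise RuntimeError(f"Found tensors on multiple devices: {devices}")
    return devices.pop()

model = nn.Linear(3, 2)
x = torch.ones(4, 3)
print(check_same_device(model, x))  # both live on the CPU here → cpu
```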

To check where the model's parameters live (for a tensor you can also just use data.is_cuda):

model = nn.LSTM(input_size=10, hidden_size=4, num_layers=1, batch_first=True)
print(next(model.parameters()).device)  
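For a single tensor, the `is_cuda` attribute mentioned above returns a boolean directly:

```python
import torch

data = torch.ones(2, 3)  # created on the CPU by default
print(data.is_cuda)      # False

# After data = data.cuda(), data.is_cuda would be True (requires a GPU).
```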

(1) The model may not have been moved to the same device as the data: call model.to(device).
(2) The input and the parameters may be on different devices; an operation like the following fixes it:

device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")  # ngpu: number of GPUs, defined elsewhere
data = data.to(device)  # note: Tensor.to() is NOT in-place — the result must be reassigned
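Putting both fixes together, here is a minimal sketch of the usual device-agnostic pattern, which falls back to the CPU when no GPU is available:

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.LSTM(input_size=10, hidden_size=4, num_layers=1, batch_first=True)
model = model.to(device)     # moves all parameters and buffers; returns the module

data = torch.ones(2, 5, 10)  # (batch, seq_len, input_size)
data = data.to(device)       # Tensor.to() returns a new tensor — must reassign

output, (h_n, c_n) = model(data)
print(output.device)         # same device as the model's parameters
```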