
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when

The error raised during GPU inference: expected all tensors to be on the same device, but found at least two devices. The corrected inference code:
import numpy as np
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)   # move the model's weights onto the GPU (or fall back to CPU)
model.eval()

for i in df:
    content = i.get('content')
    if len(content) >= 30:
        test = tokenizer(content, padding="max_length", max_length=256,
                         truncation=True, return_tensors="pt")
        test = test.to(device)   # move the tokenized input tensors to the same device as the model
        with torch.no_grad():
            outputs = model(test["input_ids"],
                            token_type_ids=None,
                            attention_mask=test["attention_mask"])
        # bring the logits back to the CPU before converting to NumPy
        pred_flat = np.argmax(outputs["logits"].cpu().numpy(), axis=1).squeeze()

When running a model on the GPU for inference, both the model and the tensors produced by the tokenizer must be moved onto the GPU before the forward pass, and the results must be moved back to the CPU before they are output (for example, converted to NumPy).
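The same device-alignment pattern applies to any PyTorch module, not just transformers models. Below is a minimal sketch using a toy nn.Linear model; the model, tensor shapes, and variable names are illustrative assumptions, not part of the original code:

import torch
import torch.nn as nn

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model = nn.Linear(4, 2).to(device)   # parameters now live on the target device
model.eval()

x = torch.randn(8, 4)                # created on the CPU by default
# tensor.to() is not in-place, so the result must be assigned back;
# forgetting this assignment on a CUDA machine reproduces the
# "Expected all tensors to be on the same device" error.
x = x.to(device)

with torch.no_grad():
    logits = model(x)

preds = logits.argmax(dim=1).cpu().numpy()   # move results back to the CPU before NumPy
print(preds)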
