```python
import numpy as np
import torch

# Move the model to GPU if one is available, otherwise fall back to CPU.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
model.eval()  # inference mode: disables dropout etc.

# df is assumed to be an iterable of dicts (records), each with a 'content' field.
for item in df:
    content = item.get('content')
    if content is None or len(content) < 30:
        continue
    # Tokenize to fixed-length tensors.
    test = tokenizer(content, padding="max_length", max_length=256,
                     truncation=True, return_tensors="pt")
    # Move the input tensors onto the same device as the model.
    test = test.to(device)
    with torch.no_grad():
        outputs = model(test["input_ids"],
                        token_type_ids=None,
                        attention_mask=test["attention_mask"])
    # Bring the logits back to CPU and convert to NumPy before taking argmax.
    pred_flat = np.argmax(outputs["logits"].cpu().numpy(), axis=1).squeeze()
```
When running inference on a GPU, both the model and the tokenized input tensors must be moved onto the GPU, and the resulting logits must be moved back to the CPU before converting them to NumPy for output.
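For context, here is a minimal sketch of how the `tokenizer` and `model` used above might be loaded with Hugging Face Transformers. The checkpoint name `bert-base-chinese` and `num_labels=2` are assumptions for illustration, not taken from the original post:

```python
# Hypothetical loading step; adjust the checkpoint and label count to your task.
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained("bert-base-chinese",
                                                      num_labels=2)
```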