
Errors Encountered While Using PyTorch and Their Fixes: A Summary


error1: keyword: copying a tensor

Error message:

UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).

Solution 1:
*** When converting a variable to a tensor, prefer torch.as_tensor() ***

Original (incorrect) code:
state_tensor_list = [torch.tensor(i) for i in batch.state]

Change to:
state_tensor_list = [torch.as_tensor(i) for i in batch.state]
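
To make the difference concrete, here is a minimal, self-contained sketch (not from the original code; the states list is just a stand-in for batch.state). torch.tensor() copy-constructs from an existing tensor and emits the warning above, while torch.as_tensor() reuses the tensor silently:

import torch

states = [torch.rand(4), torch.rand(4)]       # stand-in for batch.state

# torch.tensor() on an existing tensor copy-constructs it and triggers the UserWarning
warned = [torch.tensor(s) for s in states]

# torch.as_tensor() reuses the existing tensor when dtype/device already match, so no warning
quiet = [torch.as_tensor(s) for s in states]

print(quiet[0].shape)         # torch.Size([4])
print(quiet[0] is states[0])  # True: no copy was made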

Solution 2:
*** When converting a variable x to a tensor, prefer x.clone().detach() or x.clone().detach().requires_grad_(True) ***

Original (incorrect) code:
state_tensor_list = [torch.tensor(i) for i in batch.state]

Change to:
state_tensor_list = [i.clone().detach() for i in batch.state]
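
A small sketch of the second approach (again with stand-in data rather than the original batch): clone().detach() makes an independent copy with no autograd history, and requires_grad_(True) re-enables gradients on the copy if they are needed:

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2                        # y is part of x's autograd graph

copied = y.clone().detach()      # independent copy, no grad history
print(copied.requires_grad)      # False

trainable = y.clone().detach().requires_grad_(True)
print(trainable.requires_grad)   # True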

error2: keyword: tensor addition


Error message:
RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 1
Solution:
Check the tensor shapes: the operands must have matching (or broadcastable) dimensions.

Incorrect code:
next_state_values = torch.tensor([0.7056, 0.7165, 0.6326])
state_action_values=torch.tensor([[ 0.1139,  0.1139,  0.1139,  0.1139],
        [ 0.0884,  0.0884,  0.0884,  0.0884],
        [ 0.0019,  0.0019,  0.0019,  0.0019]])
print(next_state_values.shape)
print(state_action_values.shape)
print(next_state_values.size())
print(state_action_values.size())
next_state_values + state_action_values

Output:
torch.Size([3])
torch.Size([3, 4])
torch.Size([3])
torch.Size([3, 4])
Corrected code:
next_state_values = torch.tensor([0.7056, 0.7165, 0.6326])
state_action_values=torch.tensor([[ 0.1139,  0.1139,  0.1139,  0.1139],
        [ 0.0884,  0.0884,  0.0884,  0.0884],
        [ 0.0019,  0.0019,  0.0019,  0.0019]]).max(1)[0]  # max over dim 1 reduces [3, 4] to [3]
print(next_state_values.shape)
print(state_action_values.shape)
print(next_state_values.size())
print(state_action_values.size())
next_state_values + state_action_values

Output:
torch.Size([3])
torch.Size([3])
torch.Size([3])
torch.Size([3])
tensor([0.8195, 0.8049, 0.6345])
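
For reference, a small sketch (not part of the original code) of what .max(1) returns: a (values, indices) pair, so indexing with [0] keeps just the maximum values along dimension 1:

import torch

q = torch.tensor([[0.1139, 0.1139, 0.1139, 0.1139],
                  [0.0884, 0.0884, 0.0884, 0.0884],
                  [0.0019, 0.0019, 0.0019, 0.0019]])

values, indices = q.max(1)   # reduce over dim 1 (the action dimension)
print(values.shape)          # torch.Size([3]) -- same as q.max(1)[0]
print(indices.shape)         # torch.Size([3])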

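If the intent were instead to add the 1-D tensor to every column of the [3, 4] tensor (rather than reducing it with max first), broadcasting is an alternative fix. A sketch of that option, not from the original post:

import torch

next_state_values = torch.tensor([0.7056, 0.7165, 0.6326])               # shape [3]
state_action_values = torch.tensor([[0.1139, 0.1139, 0.1139, 0.1139],
                                    [0.0884, 0.0884, 0.0884, 0.0884],
                                    [0.0019, 0.0019, 0.0019, 0.0019]])   # shape [3, 4]

# [3] -> [3, 1] so it broadcasts against [3, 4]
summed = next_state_values.unsqueeze(1) + state_action_values
print(summed.shape)   # torch.Size([3, 4])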

error3: keyword: using nn.Linear()


Error 1:

Error: RuntimeError: expected scalar type Long but found Float

Code that raises the error:

import torch
import torch.nn as nn
class Net(nn.Module):
    def __init__(self,state_dim,mid_dim,action_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, mid_dim))
    def forward(self,state):
        res = self.net(state)
        return res
net = Net(9,5,4)
print(net)
current_state = torch.tensor([0,0,0,0,0,0,0,0,0])  # 1-D tensor; integer literals default to dtype torch.int64
print(current_state.shape)
action = net(current_state)
print(action)

Output:
Net(
  (net): Sequential(
    (0): Linear(in_features=9, out_features=5, bias=True)
  )
)
torch.Size([9])

Cause:

The 1-D state tensor needs to be reshaped into a 2-D (batch) tensor before it is passed to the network.
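
A minimal sketch of adding the batch dimension: unsqueeze(0) (equivalent to view(1, 9) here) turns a [9] state into a [1, 9] batch of size one:

import torch

state = torch.zeros(9, dtype=torch.float32)  # shape [9]
batched = state.unsqueeze(0)                 # shape [1, 9], same as state.view(1, 9)
print(batched.shape)                         # torch.Size([1, 9])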

Error 2:

Error: RuntimeError: expected scalar type Float but found Long

Code that raises the error:

import torch
import torch.nn as nn
class Net(nn.Module):
    def __init__(self,state_dim,mid_dim,action_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, mid_dim))
    def forward(self,state):
        res = self.net(state)
        return res
net = Net(9,5,4)
print(net)
current_state = torch.tensor([0,0,0,0,0,0,0,0,0])  # dtype is still torch.int64
print(current_state.shape)
current_state = current_state.view(1,9)  # reshaped to [1, 9], dtype unchanged
print(current_state.shape)
action = net(current_state)
print(action)

Output:
Net(
  (net): Sequential(
    (0): Linear(in_features=9, out_features=5, bias=True)
  )
)
torch.Size([9])
torch.Size([1, 9])

Cause:

The input tensor's dtype must be float32; nn.Linear's weights are float32 by default, so an integer (Long) input does not match.
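
A minimal sketch of the dtype fix: either create the tensor as float32 directly, or convert an existing integer tensor with .float():

import torch

state_long = torch.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0])   # dtype torch.int64
print(state_long.dtype)                                   # torch.int64

state_float = state_long.float()                          # convert to torch.float32
# or create it directly: torch.tensor([...], dtype=torch.float32)
print(state_float.dtype)                                  # torch.float32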

Solution

Apply both fixes:

1. Reshape the tensor to 2-D with .view(1, shape)
2. Specify the tensor dtype with dtype=torch.float32

Correct code:

import torch
import torch.nn as nn
class Net(nn.Module):
    def __init__(self,state_dim,mid_dim,action_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, mid_dim))
    def forward(self,state):
        res = self.net(state)
        return res
net = Net(9,5,4)
print(net)
current_state = torch.tensor([0,0,0,0,0,0,0,0,0], dtype=torch.float32)  # fix 2: float32 dtype
print(current_state.shape)
current_state = current_state.view(1,9)  # fix 1: reshape to [1, 9]
print(current_state.shape)
action = net(current_state)
print(action)