While writing this post, I found that even after calling backward(), I could not get the gradient of the parameter I wanted.
I am well aware that PyTorch by default only keeps the grad of leaf nodes and discards the grads of all intermediate nodes, and I also know that register_hook can be used to capture an intermediate node's grad. None of that solved my problem, though. My problematic code is as follows:
```python
import torch
import numpy as np

device = torch.device('cuda')
x = torch.tensor(np.random.normal(0, 1, [2, 3]), requires_grad=True).to(device)
print("grad1: ", x.requires_grad)
y = 2 * x + 3
z = y.sum()
z.backward()
print("grad2: ", x.requires_grad)
print("grad3: ", x.grad)
```
The output is:
```
grad1:  True
grad2:  True
grad3:  None
```
As the output shows, even though x.requires_grad is True throughout, x.grad is still None after backward(). I was baffled until I changed the code as follows:
```python
import torch
import numpy as np

device = torch.device('cuda')
x = torch.tensor(np.random.normal(0, 1, [2, 3]), requires_grad=True)
print("x grad1: ", x.requires_grad)
x1 = x.to(device)       # x1 is a copy of x on the GPU
y = 2 * x1 + 3          # the computation uses the GPU copy
z = y.sum()
z.backward()
print("x1 grad1: ", x1.requires_grad)
print("x1 grad2: ", x1.grad)
print("grad3: ", x.grad)
```
Here I separated the to(device) call from the assignment to x, because to(device) essentially copies x into a new tensor on the GPU. That GPU copy (x1) is therefore an intermediate node in the computation graph; only x is a leaf node, and so only x's grad is retained.
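Incidentally, if you genuinely need the grad of a non-leaf tensor such as x1, the retain_grad() and register_hook() mechanisms mentioned above can provide it. The snippet below is only a minimal sketch built on the example above, not part of the original experiment:

```python
import torch
import numpy as np

device = torch.device('cuda')
x = torch.tensor(np.random.normal(0, 1, [2, 3]), requires_grad=True)
x1 = x.to(device)                        # non-leaf (intermediate) node on the GPU

x1.retain_grad()                         # ask autograd to keep x1.grad after backward()
x1.register_hook(lambda g: print("hook saw grad:", g))  # or inspect the grad via a hook

z = (2 * x1 + 3).sum()
z.backward()

print("x.grad:  ", x.grad)               # leaf grad, kept by default
print("x1.grad: ", x1.grad)              # now available thanks to retain_grad()
```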
So when writing this kind of code in the future, prefer one of the following two patterns. The first creates the tensor directly on the GPU as a leaf:
```python
import torch
import numpy as np

device = torch.device('cuda')
x = torch.tensor(np.random.normal(0, 1, [2, 3]), device=device, requires_grad=True)
print("x grad1: ", x.requires_grad)
y = 2 * x + 3
z = y.sum()
z.backward()
print("grad2: ", x.grad)
```
The output is:
```
x grad1:  True
grad2:  tensor([[2., 2., 2.],
        [2., 2., 2.]], device='cuda:0', dtype=torch.float64)
```
The second pattern moves the tensor to the GPU first and only then enables gradient tracking in place with requires_grad_(True), so the GPU tensor itself becomes the leaf:

```python
import torch
import numpy as np

device = torch.device('cuda')
x = torch.tensor(np.random.normal(0, 1, [2, 3])).to(device)
x.requires_grad_(True)
print("x grad1: ", x.requires_grad)
y = 2 * x + 3
z = y.sum()
z.backward()
print("grad2: ", x.grad)
```
The output is:
```
x grad1:  True
grad2:  tensor([[2., 2., 2.],
        [2., 2., 2.]], device='cuda:0', dtype=torch.float64)
```
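As a quick sanity check (a small sketch of my own, not part of the original post), Tensor.is_leaf shows directly which of the constructions above produces a leaf node:

```python
import torch
import numpy as np

device = torch.device('cuda')

a = torch.tensor(np.random.normal(0, 1, [2, 3]), requires_grad=True).to(device)
b = torch.tensor(np.random.normal(0, 1, [2, 3]), device=device, requires_grad=True)
c = torch.tensor(np.random.normal(0, 1, [2, 3])).to(device)
c.requires_grad_(True)

print(a.is_leaf)  # False: .to(device) on a requires-grad tensor returns a non-leaf copy
print(b.is_leaf)  # True: created directly on the GPU as a leaf
print(c.is_leaf)  # True: requires_grad was turned on only after the move to the GPU
```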