The typical training procedure for a neural network is: define the network and its learnable parameters, run the input through the network (forward pass), compute the loss, back-propagate the gradients, and update the weights.
torch.nn is used to define the parameter structure of the network, torch.nn.functional performs the forward-pass operations, and torch.autograd handles back-propagation.
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
-
-
- class Net(nn.Module):
-
-     def __init__(self):
-         super(Net, self).__init__()
-         # 1 input image channel, 6 output channels, 3x3 square convolution
-         # kernel
-         self.conv1 = nn.Conv2d(1, 6, 3)
-         self.conv2 = nn.Conv2d(6, 16, 3)
-         # an affine operation: y = Wx + b
-         self.fc1 = nn.Linear(16 * 6 * 6, 120)  # 6*6 from image dimension
-         self.fc2 = nn.Linear(120, 84)
-         self.fc3 = nn.Linear(84, 10)
-
-     def forward(self, x):
-         # Max pooling over a (2, 2) window
-         x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
-         # If the size is a square you can only specify a single number
-         x = F.max_pool2d(F.relu(self.conv2(x)), 2)
-         x = x.view(-1, self.num_flat_features(x))
-         x = F.relu(self.fc1(x))
-         x = F.relu(self.fc2(x))
-         x = self.fc3(x)
-         return x
-
-     def num_flat_features(self, x):
-         size = x.size()[1:]  # all dimensions except the batch dimension
-         num_features = 1
-         for s in size:
-             num_features *= s
-         return num_features
-
-
- net = Net()
- print(net)
output:
- Net(
- (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
- (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
- (fc1): Linear(in_features=576, out_features=120, bias=True)
- (fc2): Linear(in_features=120, out_features=84, bias=True)
- (fc3): Linear(in_features=84, out_features=10, bias=True)
- )
Shown as a diagram, the architecture is two convolution + pooling stages followed by three fully connected layers.
The Net class contains two parts: the __init__ function defines the convolution kernels and fully connected layers (what does each number stand for? see the shape trace sketched below); the forward function defines all the tensor operations. (The backward pass is provided by the autograd package, so you do not need to define it yourself.)
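To see where the numbers in __init__ come from, here is a minimal sketch (not part of the original code, reusing the net defined above) that traces the tensor shape through the layers:
- x = torch.randn(1, 1, 32, 32)                     # nSamples x nChannels x Height x Width
- x = F.max_pool2d(F.relu(net.conv1(x)), (2, 2))    # 32x32 -> 30x30 (3x3 kernel) -> 15x15 (2x2 pool), 6 channels
- x = F.max_pool2d(F.relu(net.conv2(x)), 2)         # 15x15 -> 13x13 -> 6x6 (floored), 16 channels
- print(x.size())                                   # torch.Size([1, 16, 6, 6]); 16 * 6 * 6 = 576 = fc1's in_features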
Code walkthrough
conv2d
class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
- in_channels (int) - number of input channels
- out_channels (int) - number of output channels
- kernel_size (int or tuple) - size of the convolution kernel
- stride (int or tuple, optional) - stride of the convolution
- padding (int or tuple, optional) - number of layers of zeros added to each side of the input
- dilation (int or tuple, optional) - spacing between kernel elements
- groups (int, optional) - number of blocked connections from input channels to output channels
- bias (bool, optional) - if True, adds a learnable bias to the output
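With the default stride=1 and padding=0, the output spatial size is (H + 2*padding - dilation*(kernel_size - 1) - 1)/stride + 1, rounded down. A quick sketch (the variable names are illustrative, not from the original):
- conv = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=3)   # same settings as self.conv1
- x = torch.randn(1, 1, 32, 32)
- print(conv(x).size())   # torch.Size([1, 6, 30, 30]): (32 + 2*0 - 1*(3-1) - 1)/1 + 1 = 30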
linear
- class torch.nn.Linear(in_features, out_features, bias=True)
Applies a linear transformation to the incoming data: y = Ax + b
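A small shape check, assuming the same in/out features as fc1 above (illustrative only, not part of the original code):
- fc = nn.Linear(in_features=576, out_features=120)   # same shape as self.fc1
- x = torch.randn(1, 576)
- print(fc(x).size())        # torch.Size([1, 120])
- print(fc.weight.size())    # torch.Size([120, 576]) -- the matrix A
- print(fc.bias.size())      # torch.Size([120])      -- the bias b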
relu: the non-linear activation function
torch.nn.functional.relu(input, inplace=False)
max_pool2d
torch.nn.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
In forward above, the first pooling call uses a (2, 2) window (when stride is not given it defaults to the kernel size, not 0); the second call passes the single number 2, which is allowed when the window is square.
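To illustrate both calls used in forward (a sketch with an assumed 6-channel feature map, as produced by conv1):
- x = torch.randn(1, 6, 30, 30)            # e.g. the feature map after conv1
- print(F.max_pool2d(x, (2, 2)).size())    # torch.Size([1, 6, 15, 15]); stride defaults to the kernel size
- print(F.max_pool2d(x, 2).size())         # torch.Size([1, 6, 15, 15]); a single number works for square windows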
- params = list(net.parameters())
- print(len(params))
- print(params[0].size()) # conv1's .weight
output:
- 10
- torch.Size([6, 1, 3, 3])
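The 10 is one weight tensor plus one bias tensor for each of the five layers; a quick way to see this (a small sketch, not from the original):
- for name, param in net.named_parameters():
-     print(name, param.size())
- # conv1.weight torch.Size([6, 1, 3, 3])
- # conv1.bias torch.Size([6])
- # ... weight and bias for conv2, fc1, fc2 and fc3 follow -- 10 tensors in total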
Suppose the input is a random 32x32 matrix; out = net(input) computes out with the network we just defined.
- input = torch.randn(1, 1, 32, 32)
- out = net(input)
- print(out)
Before back-propagating, call net.zero_grad() to clear the stored parameter gradients.
- net.zero_grad()
- out.backward(torch.randn(1, 10))
Note: torch.nn only supports mini-batch input. nn.Conv2d expects a 4D tensor of shape nSamples x nChannels x Height x Width.
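If you only have a single sample, you can add a fake batch dimension with input.unsqueeze(0), for example:
- single = torch.randn(1, 32, 32)      # nChannels x Height x Width, no batch dimension
- batched = single.unsqueeze(0)        # add a fake batch dimension of size 1
- print(batched.size())                # torch.Size([1, 1, 32, 32])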
There are many ways to define the error between the target and the network's output. One of the loss functions built into the nn package is MSELoss, the mean squared error.
- output = net(input)
- target = torch.randn(10) # a dummy target, for example
- target = target.view(1, -1) # make it the same shape as output
- criterion = nn.MSELoss()
-
- loss = criterion(output, target)
- print(loss)
output:
- tensor(0.6285, grad_fn=<MseLossBackward>)
We have now computed the error between output and target, and called it loss.
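The grad_fn shown in the output above is the entry point of the computation graph that autograd recorded; following next_functions walks back through the operations (a short sketch using the same setup):
- print(loss.grad_fn)                                            # MSELoss
- print(loss.grad_fn.next_functions[0][0])                       # Linear (Addmm)
- print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU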
Next, call the backward() function on loss. The code below prints conv1's bias gradient before and after back-propagation; it starts out at zero.
- net.zero_grad() # zeroes the gradient buffers of all parameters
-
- print('conv1.bias.grad before backward')
- print(net.conv1.bias.grad)
-
- loss.backward()
-
- print('conv1.bias.grad after backward')
- print(net.conv1.bias.grad)
torch.optim
Stochastic Gradient Descent (SGD) is the simplest and most practical update rule:
weight = weight - learning_rate * gradient
Implementing this rule directly in code could look like this:
- learning_rate = 0.01
-
- for f in net.parameters():
-     f.data.sub_(f.grad.data * learning_rate)
But this is awkward to use in a real network, so there is a package, torch.optim, that implements SGD and various other parameter-update methods.
- import torch.optim as optim
-
- # create your optimizer
- optimizer = optim.SGD(net.parameters(), lr=0.01)
-
-
- # in your training loop:
- optimizer.zero_grad()              # clear the gradient buffers
- output = net(input)
- loss = criterion(output, target)
- loss.backward()
- optimizer.step()                   # perform the update