In PyTorch, the computational graph is built in mini-batch fashion, so X and Y are 3×1 tensors.
Model: y_hat = w * x + b (both x and y_hat are real numbers).
Mini-batch: we want to compute the predictions for all three samples [(x1, y1), (x2, y2), (x3, y3)] in one pass.
NumPy-style broadcasting: when two arrays of different shapes are added, the one with the smaller shape is automatically expanded to match the other.
y_hat1 = w*x1 + b, y_hat2 = w*x2 + b, y_hat3 = w*x3 + b; written in vector form:
[ŷ1, ŷ2, ŷ3]ᵀ = w · [x1, x2, x3]ᵀ + b
Note: since x and y_hat are 3×1 matrices, w and b are automatically broadcast to 3×1 as well.
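To make the broadcasting rule concrete, here is a minimal NumPy sketch (the values are chosen arbitrarily for illustration):

import numpy as np

x = np.array([[1.0], [2.0], [3.0]])  # 3x1 matrix
w = 2.0                              # scalar, broadcast to 3x1
b = 1.0                              # scalar, broadcast to 3x1
print(w * x + b)                     # [[3.] [5.] [7.]] -- row i is w*x_i + b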
Computing the loss:
loss_n = (ŷ_n − y_n)² for each sample; in vector form, over the whole mini-batch:
loss = Σ_n (ŷ_n − y_n)²
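As a quick check of the vectorized loss, a minimal sketch (the ideal values w = 2, b = 0 are assumed for illustration):

import torch

x = torch.Tensor([[1.0], [2.0], [3.0]])
y = torch.Tensor([[2.0], [4.0], [6.0]])
w, b = 2.0, 0.0                  # assumed ideal parameters
y_hat = w * x + b                # broadcasting: w and b expand to 3x1
loss = ((y_hat - y) ** 2).sum()  # summed squared error over the mini-batch
print(loss.item())               # 0.0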
Code implementation:

import torch

x_data = torch.Tensor([[1.0], [2.0], [3.0]])  # 3x1 inputs
y_data = torch.Tensor([[2.0], [4.0], [6.0]])  # 3x1 targets
The model's job is to compute y_hat.
Code implementation:
class LinearModel(torch.nn.Module):          # define the model as a class inheriting from Module
    def __init__(self):                      # constructor
        super(LinearModel, self).__init__()  # call the parent constructor
        self.linear = torch.nn.Linear(1, 1)  # Linear(1, 1) constructs an object holding two Tensors: weight and bias

    def forward(self, x):                    # forward pass
        y_pred = self.linear(x)              # the object is callable: linear(x) applies the layer to x
        return y_pred

model = LinearModel()                        # instantiate; later called as model(x)
Reference: https://pytorch.org/docs/stable/generated/torch.nn.Linear.html#torch.nn.Linear
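A quick standalone sketch of what nn.Linear(1, 1) contains: it stores a 1×1 weight and a 1-element bias, and applies y = x·Wᵀ + b to its input.

import torch

linear = torch.nn.Linear(1, 1)  # same layer as inside LinearModel above
print(linear.weight.shape)      # torch.Size([1, 1])
print(linear.bias.shape)        # torch.Size([1])
x = torch.Tensor([[4.0]])
print(linear(x))                # equals x @ linear.weight.T + linear.bias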
With an ordinary function, every argument must match a named parameter:

def func(a, b, c, x, y):
    pass

func(1, 2, 3, x=4, y=5)
If the parameters are replaced with *args, printing it yields a tuple; if they are replaced with **kwargs, printing it yields a dictionary.
def func(*args, **kwargs):
    print(args)    # (1, 2, 3)
    print(kwargs)  # {'x': 4, 'y': 5}

func(1, 2, 3, x=4, y=5)
Example
class Foobar:
    def __init__(self):
        pass

    def __call__(self, *args, **kwargs):
        print('Hello' + str(args[0]))  # prints "Hello1"

foobar = Foobar()
foobar(1, 2, 3)  # an instance that defines __call__ can be called like a function
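This is exactly why model(x) works: nn.Module defines __call__, which (among other bookkeeping) delegates to forward(). A simplified sketch of the idea, not PyTorch's actual implementation:

class MiniModule:
    """Simplified stand-in for torch.nn.Module (illustration only)."""
    def __call__(self, *args, **kwargs):
        return self.forward(*args, **kwargs)  # delegate to the subclass's forward

class Doubler(MiniModule):
    def forward(self, x):
        return 2 * x

print(Doubler()(21))  # 42 -- calling the instance invokes forward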
Construct the loss function and optimizer using PyTorch's API.
Code implementation:
criterion = torch.nn.MSELoss(reduction='sum')  # MSELoss also inherits from nn.Module; criterion takes y_hat and y
# (the original size_average=False argument is deprecated; reduction='sum' is the modern equivalent)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # instantiate the SGD class from torch.optim;
# model.parameters() recursively inspects all members of model and collects every trainable weight; lr is the learning rate
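To see what model.parameters() hands to the optimizer (assuming the model defined above), you can list the named parameters:

for name, param in model.named_parameters():
    print(name, param.shape)
# linear.weight torch.Size([1, 1])
# linear.bias   torch.Size([1])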
Training consists of three steps: forward (compute the loss), backward (compute the gradients), and update (adjust the weights with gradient descent).
Code implementation:
for epoch in range(100):
    y_pred = model(x_data)            # step 1: compute y_hat
    loss = criterion(y_pred, y_data)  # step 2: compute the loss
    print(epoch, loss)                # loss is a scalar tensor; printing calls __str__() and does not build a graph

    optimizer.zero_grad()             # step 3: zero all gradients
    loss.backward()                   # step 4: backward pass
    optimizer.step()                  # step 5: update the weights
# Output weight and bias
print('w = ', model.linear.weight.item())  # weight is a matrix (here 1x1), so use item() to print its value
print('b = ', model.linear.bias.item())
# Test Model
x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
Training results:
When the LBFGS optimizer is used instead of SGD, the following error appears:
TypeError: step() missing 1 required positional argument: 'closure'
LBFGS may re-evaluate the loss several times within a single optimization step, so its step() method requires a closure that recomputes the loss and gradients. Corrected code:
import torch
import matplotlib.pyplot as plt

x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()
criterion = torch.nn.MSELoss(reduction='sum')

# 5. LBFGS
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.01)

epoch_list = []
loss_list = []

# Training cycle (forward, backward, update)
for epoch in range(1000):
    def closure():  # LBFGS calls this to re-evaluate the loss as often as it needs
        optimizer.zero_grad()
        y_pred = model(x_data)
        loss = criterion(y_pred, y_data)
        print(epoch, loss.item())
        epoch_list.append(epoch)
        loss_list.append(loss.item())
        loss.backward()
        return loss
    optimizer.step(closure)

print('w =', model.linear.weight.item())
print('b =', model.linear.bias.item())

x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred =', y_test.data)

plt.plot(epoch_list, loss_list)  # x and y values
plt.xlabel('Epoch')              # x-axis label
plt.ylabel('Loss')               # y-axis label
plt.title('LBFGS')               # plot title
plt.show()
Complete code (switch optimizers by uncommenting the corresponding line):

import torch
import matplotlib.pyplot as plt

x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

class LinearModel(torch.nn.Module):          # define the model as a class inheriting from Module
    def __init__(self):                      # constructor
        super(LinearModel, self).__init__()  # call the parent constructor
        self.linear = torch.nn.Linear(1, 1)  # Linear(1, 1) holds the weight and bias Tensors

    def forward(self, x):                    # forward pass
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()                        # instantiate; called as model(x)

criterion = torch.nn.MSELoss(reduction='sum')  # MSELoss also inherits from nn.Module; takes y_hat and y

# 1. Adagrad
# optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
# 2. Adam
# optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# 3. Adamax
# optimizer = torch.optim.Adamax(model.parameters(), lr=0.01)
# 4. ASGD
# optimizer = torch.optim.ASGD(model.parameters(), lr=0.01)
# 5. LBFGS (requires a closure; see the corrected code above)
# optimizer = torch.optim.LBFGS(model.parameters(), lr=1)
# 6. RMSprop
# optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01)
# 7. Rprop
# optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)
# 8. SGD
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # model.parameters() collects all trainable weights; lr is the learning rate

epoch_list = []
loss_list = []

for epoch in range(100):
    y_pred = model(x_data)            # step 1: compute y_hat
    loss = criterion(y_pred, y_data)  # step 2: compute the loss
    print(epoch, loss)                # loss is a scalar tensor; printing calls __str__() and does not build a graph
    epoch_list.append(epoch)
    loss_list.append(loss.item())

    optimizer.zero_grad()             # step 3: zero all gradients
    loss.backward()                   # step 4: backward pass
    optimizer.step()                  # step 5: update the weights

# Output weight and bias
print('w = ', model.linear.weight.item())  # weight is a matrix (here 1x1), so use item() to print its value
print('b = ', model.linear.bias.item())

# Test model
x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)

# Plot the loss curve
plt.plot(epoch_list, loss_list)  # x and y values
plt.xlabel('Epoch')              # x-axis label
plt.ylabel('Loss')               # y-axis label
plt.title('SGD')                 # plot title
plt.show()
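To try the exercise without editing the script once per optimizer, here is a hypothetical helper (my own sketch, not from the original course) that trains a fresh model with each optimizer and overlays the loss curves; LBFGS is left out because it needs the closure variant shown above:

import torch
import matplotlib.pyplot as plt

def train(opt_class, lr=0.01, epochs=100):
    """Train a fresh 1-in/1-out linear model with the given optimizer class."""
    x = torch.Tensor([[1.0], [2.0], [3.0]])
    y = torch.Tensor([[2.0], [4.0], [6.0]])
    model = torch.nn.Linear(1, 1)
    criterion = torch.nn.MSELoss(reduction='sum')
    optimizer = opt_class(model.parameters(), lr=lr)
    losses = []
    for _ in range(epochs):
        loss = criterion(model(x), y)
        losses.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return losses

for opt in (torch.optim.SGD, torch.optim.Adam, torch.optim.RMSprop):
    plt.plot(train(opt), label=opt.__name__)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()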
https://pytorch.org/tutorials/beginner/pytorch_with_examples.html
This article is based on the course《PyTorch深度学习实践》(PyTorch Deep Learning Practice).
I'm 璞玉牧之, and I will keep publishing articles like this one; I hope we can learn and improve together! If this article helped you, a like, bookmark, or comment is appreciated. See you next time!