After training a model on data and reaching satisfactory results, you cannot retrain it from scratch every time it is needed in practice; the trained model must be saved and then loaded for direct use. A model is essentially a collection of parameters stored in a particular structure, so there are two ways to save it. One way is to save the entire model and later load it back directly, which costs more storage; the other is to save only the model's parameters, then, when needed, create a new model with the same structure and load the saved parameters into it.
# Save:

torch.save(model.state_dict(), 'mymodel.pth')  # save only the weight parameters, not the model structure

# Load:

model = My_model(*args, **kwargs)              # the model structure (My_model) must be rebuilt first
model.load_state_dict(torch.load('mymodel.pth'))  # load the stored parameters into that structure
model.eval()

# Save:

torch.save(model, 'mymodel.pth')  # save the entire model

# Load:

model = torch.load('mymodel.pth')  # no need to rebuild the structure; load directly
model.eval()
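As a minimal, self-contained sketch of the parameters-only method (the tiny nn.Linear stand-in model and the temporary file path are illustrative choices, not part of the example above):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Stand-in model; any nn.Module works the same way.
model = nn.Linear(4, 2)

path = os.path.join(tempfile.mkdtemp(), 'mymodel.pth')

# Save only the weight parameters.
torch.save(model.state_dict(), path)

# To load, rebuild the same structure first, then import the parameters.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(path))
restored.eval()

# The restored parameters match the originals exactly.
assert torch.equal(model.weight, restored.weight)
assert torch.equal(model.bias, restored.bias)
```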
PyTorch keeps a model's parameters in a dictionary (the state dict), so saving the model amounts to saving this dictionary and loading it back later.
For example, build a small LSTM network and train it; after training, save its parameter dictionary to a file such as rnn.pkl (or a .pth or .pt file) in the same folder:
You will often see PyTorch model files with the suffixes .pt, .pth, and .pkl. Is there any difference in format between them? No — they differ only in the file extension (nothing more). When saving with torch.save(), people simply have different preferences: some like .pt, others .pth or .pkl. Files produced by the same torch.save() call are identical regardless of suffix.
Both .pt and .pth appear in the official PyTorch documentation and code. The common convention is .pth, although .pt seems more frequent in the official docs, and the project does not insist on one fixed choice.
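To illustrate that the suffix is purely cosmetic, here is a small sketch (the file names and the tiny stand-in model are arbitrary) that saves one state dict under all three extensions and checks that each file loads back identically:

```python
import os
import tempfile

import torch
import torch.nn as nn

model = nn.Linear(3, 1)
tmp = tempfile.mkdtemp()

# The same torch.save() call, three different suffixes.
for name in ('m.pt', 'm.pth', 'm.pkl'):
    torch.save(model.state_dict(), os.path.join(tmp, name))

# Every file loads back to the same parameters.
for name in ('m.pt', 'm.pth', 'm.pkl'):
    sd = torch.load(os.path.join(tmp, name))
    assert torch.equal(sd['weight'], model.weight)
    assert torch.equal(sd['bias'], model.bias)
```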
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # Initial hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        # Forward propagate the LSTM
        out, _ = self.lstm(x, (h0, c0))
        # out: tensor of shape (batch_size, seq_length, hidden_size)
        out = self.fc(out)
        return out


rnn = LSTM(input_size=1, hidden_size=10, num_layers=2).to(device)

optimizer = torch.optim.Adam(rnn.parameters(), lr=0.001)  # optimize all parameters
loss_func = nn.MSELoss()                                  # mean-squared-error loss

# train_tensor / train_labels_tensor: training data prepared beforehand
for epoch in range(1000):
    output = rnn(train_tensor)                     # network output
    loss = loss_func(output, train_labels_tensor)  # compute the loss
    optimizer.zero_grad()                          # clear gradients for this training step
    loss.backward()                                # backpropagation, compute gradients
    optimizer.step()                               # apply gradients


# Save the model's parameters
torch.save(rnn.state_dict(), 'rnn.pt')
Once saved, the trained model can be loaded and used to process new data:
# Test the saved model
m_state_dict = torch.load('rnn.pt')
# Rebuild the same structure, then load the parameters into it
new_m = LSTM(input_size=1, hidden_size=10, num_layers=2).to(device)
new_m.load_state_dict(m_state_dict)
predict = new_m(test_tensor)
Note: the model structure used when loading must be the same as the one used when saving. In the example above, the new LSTM instance must be created with the same arguments that rnn was created with — i.e. new_m = LSTM(input_size=1, hidden_size=10, num_layers=2).to(device) — before calling new_m.load_state_dict(m_state_dict). The simplest way to guarantee this is to copy the original Python script and add the loading lines to it.
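To see what the structure requirement means in practice, here is a small sketch (the layer shapes are chosen arbitrarily) showing that load_state_dict raises a RuntimeError when the parameter shapes do not match:

```python
import torch
import torch.nn as nn

# Parameters saved from a layer of one shape...
saved = nn.Linear(1, 10).state_dict()

# ...cannot be loaded into a layer of a different shape.
try:
    nn.Linear(1, 5).load_state_dict(saved)
    failed = False
except RuntimeError:
    failed = True  # size mismatch for 'weight' and 'bias'

assert failed
```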
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # Initial hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        # Forward propagate the LSTM
        out, _ = self.lstm(x, (h0, c0))  # out: (batch_size, seq_length, hidden_size)
        out = self.fc(out)
        return out


rnn = LSTM(input_size=1, hidden_size=10, num_layers=2).to(device)
print(rnn)

optimizer = torch.optim.Adam(rnn.parameters(), lr=0.001)  # optimize all parameters
loss_func = nn.MSELoss()                                  # mean-squared-error loss

for epoch in range(1000):
    output = rnn(train_tensor)                     # network output
    loss = loss_func(output, train_labels_tensor)  # compute the loss
    optimizer.zero_grad()                          # clear gradients for this training step
    loss.backward()                                # backpropagation, compute gradients
    optimizer.step()                               # apply gradients


# Save the entire model
torch.save(rnn, 'rnn1.pt')
Once saved, the trained model can be loaded and used to process new data:
new_m = torch.load('rnn1.pt')  # no need to rebuild the structure
predict = new_m(test_tensor)
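One practical caveat: loading a whole pickled model requires the class definition to be importable at load time, and on recent PyTorch versions torch.load needs weights_only=False for full models; map_location can place the loaded tensors on a chosen device, e.g. when a GPU-trained model is loaded on a CPU-only machine. A minimal sketch with a tiny stand-in model and temporary path (both illustrative assumptions):

```python
import os
import tempfile

import torch
import torch.nn as nn

model = nn.Linear(2, 2)
path = os.path.join(tempfile.mkdtemp(), 'rnn1.pt')
torch.save(model, path)

# map_location forces the loaded tensors onto the given device;
# weights_only=False is required for full pickled models on recent PyTorch.
new_m = torch.load(path, map_location='cpu', weights_only=False)
new_m.eval()

assert torch.equal(new_m.weight, model.weight)
```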