
PyTorch: Saving and Loading Network Models


1. Saving a Model

Method 1: save the model structure together with its parameters

Pitfall: the script that later loads this file must be able to access your own model's class definition; you can import the earlier model definition into the loading file (see the sketch after the code below).

model_save.py

import torch
import torchvision

vgg16 = torchvision.models.vgg16(weights=None)
# Method 1: save the whole model (structure + parameters)
torch.save(vgg16, 'vgg16_method1.pth')
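A minimal sketch of the pitfall mentioned above, using a hypothetical custom class MyModel (not part of the original example): the file written by Method 1 pickles a reference to the class, so the loading script must be able to import or otherwise define that class.

import torch
from torch import nn

# Hypothetical custom model used only to illustrate the Method 1 pitfall
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3)

    def forward(self, x):
        return self.conv1(x)

model = MyModel()
torch.save(model, 'my_model_method1.pth')

# In a separate loading script, the line below fails unless the MyModel
# class definition is available there, e.g. via `from model_save import MyModel`
model = torch.load('my_model_method1.pth')
print(model)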

Method 2: save only the model parameters (officially recommended; the file is smaller)

model_save.py

import torch
import torchvision

vgg16 = torchvision.models.vgg16(weights=None)
# Method 2: save only the network's parameters (the state_dict)
torch.save(vgg16.state_dict(), 'vgg16_method2.pth')
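As a quick sanity check (a sketch, assuming both files above were just written to the working directory), you can compare the two file sizes and inspect the keys of the state_dict:

import os
import torch
import torchvision

vgg16 = torchvision.models.vgg16(weights=None)
torch.save(vgg16, 'vgg16_method1.pth')               # Method 1: whole model
torch.save(vgg16.state_dict(), 'vgg16_method2.pth')  # Method 2: parameters only

# The whole-model file is typically a bit larger than the parameters-only file
print(os.path.getsize('vgg16_method1.pth'), os.path.getsize('vgg16_method2.pth'))

# A state_dict is simply a mapping from parameter names to tensors
print(list(vgg16.state_dict().keys())[:3])
# ['features.0.weight', 'features.0.bias', 'features.2.weight']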

2. Loading a Model

model_load.py (for Method 1)

import torch

# Load the whole model saved with Method 1 (structure + parameters)
model = torch.load('vgg16_method1.pth')
print(model)

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

Process finished with exit code 0
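One caveat, depending on your PyTorch version: newer releases default torch.load to weights_only=True, which refuses to unpickle a full nn.Module object. If loading the Method 1 file raises an error, pass weights_only=False explicitly, and only for files you trust (a sketch, assuming a PyTorch version that supports this argument):

import torch

# weights_only=False allows unpickling arbitrary objects such as a full model;
# keep the default for untrusted checkpoints.
model = torch.load('vgg16_method1.pth', weights_only=False)
print(model)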

model_load.py (for Method 2)

import torch

# Method 2 stored only the parameters, so this returns an OrderedDict of tensors
model2 = torch.load('vgg16_method2.pth')
print(model2)

OrderedDict([('features.0.weight', tensor([[[[ 0.0588, -0.0743, -0.1424],
          [-0.0034,  0.0577,  0.0819],
          [-0.0233, -0.0427,  0.1821]],

         [[ 0.0583, -0.0244,  0.0121],
          [ 0.0243, -0.0532,  0.0252],
          [-0.0372,  0.0098,  0.0754]],

         [[ 0.0480,  0.0094,  0.0544],
          [-0.0291, -0.0081,  0.0834],
          [-0.0282,  0.0537, -0.0363]]],

......

To restore the full network model:

import torch
import torchvision

# Rebuild the architecture, then load the saved parameters into it
vgg16 = torchvision.models.vgg16(weights=None)
vgg16.load_state_dict(torch.load('vgg16_method2.pth'))
print(vgg16)

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)

......
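To verify that the restored model actually runs, here is a minimal forward-pass check (a sketch; the 3x224x224 input shape matches the standard VGG16 ImageNet input):

import torch
import torchvision

vgg16 = torchvision.models.vgg16(weights=None)
vgg16.load_state_dict(torch.load('vgg16_method2.pth'))
vgg16.eval()                          # evaluation mode: disables Dropout

dummy = torch.randn(1, 3, 224, 224)   # a batch with one fake image
with torch.no_grad():
    out = vgg16(dummy)
print(out.shape)                      # torch.Size([1, 1000])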
