
PyTorch 08: Pretrained Models (Transfer Learning)


1. What Is a Pretrained Network

A pretrained network is a saved convolutional neural network that was previously trained on a large dataset, typically a large-scale image classification task. If the original dataset is large and general enough, the spatial hierarchy of features the pretrained network has learned can serve as an effective model of the visual world for feature extraction.

Even when a new problem or task is completely different from the original one, the learned features are portable across problems. This is an important advantage of deep learning over shallow methods, and it makes deep learning very effective on small-data problems. Even with a tiny dataset, such as the four-class weather task from the previous chapter with fewer than 1,000 images, we can bring in a pretrained model, use its already-trained weights to extract features from the four kinds of images, and train a classifier on top.

ImageNet is a database of images manually labeled by category, built for machine vision research; it now contains over 22,000 categories. When we hear "ImageNet" in the context of deep learning and convolutional neural networks, we are usually referring to the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The goal of this classification competition is to train a model that correctly assigns an input image to one of 1,000 categories, with 1.2 million training images, 50,000 validation images, and 100,000 test images. The 1,000 categories are things we encounter in daily life, such as dogs, cats, various household objects, and vehicle types.

In image classification, accuracy on the ImageNet competition has become a benchmark for computer vision classification algorithms. Since 2012, convolutional neural networks and deep learning have dominated the competition's leaderboard.

Pretrained Networks Built into PyTorch

The torchvision library ships classic model architectures such as VGG16, VGG19, DenseNet, ResNet, MobileNet, and Inception v3.
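Loading any of these follows the same pattern. A quick sketch using the older torchvision API, where `pretrained=True` downloads the ImageNet weights (newer torchvision releases use a `weights=` argument instead):

    import torchvision.models as models

    # Each call builds the architecture and downloads ImageNet-trained weights:
    vgg16     = models.vgg16(pretrained=True)
    resnet18  = models.resnet18(pretrained=True)
    densenet  = models.densenet121(pretrained=True)
    mobilenet = models.mobilenet_v2(pretrained=True)
    inception = models.inception_v3(pretrained=True)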

2. The Pretrained VGG16 Network

About VGG

VGG stands for the Visual Geometry Group, part of the Department of Engineering Science at the University of Oxford. The group released a series of convolutional network models whose names begin with VGG, ranging from VGG16 to VGG19, applicable to face recognition, image classification, and more.

The VGG architecture was proposed by Simonyan and Zisserman in 2014 and described in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition".

VGG's original motivation for studying convolutional network depth was to understand how depth affects the accuracy of large-scale image classification and recognition. The first model was VGG-16, billed as a very deep convolutional network (VGG-Very-Deep-16).

The VGG design is simple and effective: the early layers use only 3×3 convolution kernels to increase network depth, max pooling progressively shrinks the feature maps stage by stage, and the last three layers are two fully connected layers with 4,096 neurons each followed by a softmax layer.

To deepen the network without an explosion of parameters, VGG uses small 3×3 kernels in all convolutional layers, with the stride set to 1. The input to VGG is a 224×224 RGB image; the RGB mean computed over the training images is subtracted from every image before it is fed into the network, which then applies 3×3 (or 1×1) filters with the convolution stride fixed at 1.

VGG has 3 fully connected layers. Depending on the total number of convolutional plus fully connected layers, the variants range from VGG11 to VGG19: the smallest, VGG11, has 8 convolutional layers and 3 fully connected layers; the largest, VGG19, has 16 convolutional layers plus 3 fully connected layers. A pooling layer does not follow every convolutional layer; there are 5 pooling layers in total, distributed after different convolutional layers.

In the usual notation, conv denotes a convolutional layer; FC denotes a fully connected (dense) layer; conv3 means the layer uses 3×3 filters; conv3-64 means 3×3 filters with depth 64; maxpool denotes max pooling.

In practice the first fully connected layer can also be converted into a 7×7 convolution and the following two into 1×1 convolutions, turning the whole of VGG into a fully convolutional network (FCN).
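A minimal sketch of this FC-to-conv conversion, assuming the stock torchvision VGG16: each fully connected layer's weight matrix is reshaped into an equivalent convolution kernel, so a 224×224 input yields a 1×1 class map and larger inputs yield larger spatial maps.

    import torch
    import torch.nn as nn
    import torchvision

    vgg = torchvision.models.vgg16(pretrained=True)
    fc1, fc2, fc3 = vgg.classifier[0], vgg.classifier[3], vgg.classifier[6]

    conv1 = nn.Conv2d(512, 4096, kernel_size=7)   # replaces Linear(25088, 4096); 25088 = 512*7*7
    conv1.weight.data = fc1.weight.data.view(4096, 512, 7, 7)
    conv1.bias.data = fc1.bias.data

    conv2 = nn.Conv2d(4096, 4096, kernel_size=1)  # replaces Linear(4096, 4096)
    conv2.weight.data = fc2.weight.data.view(4096, 4096, 1, 1)
    conv2.bias.data = fc2.bias.data

    conv3 = nn.Conv2d(4096, 1000, kernel_size=1)  # replaces Linear(4096, 1000)
    conv3.weight.data = fc3.weight.data.view(1000, 4096, 1, 1)
    conv3.bias.data = fc3.bias.data

    # Dropout is omitted since this sketch is for inference only.
    fcn = nn.Sequential(vgg.features, conv1, nn.ReLU(inplace=True),
                        conv2, nn.ReLU(inplace=True), conv3).eval()

    with torch.no_grad():
        print(fcn(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000, 1, 1])
        print(fcn(torch.randn(1, 3, 448, 448)).shape)  # larger input -> spatial class map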

Hands-On Code

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    import numpy as np
    import matplotlib.pyplot as plt
    %matplotlib inline
    import torchvision
    from torchvision import transforms
    import os

    """(1) Load the VGG model"""
    print('(1) Load the VGG model')
    base_dir = r'../dataset2/4weather'
    train_dir = os.path.join(base_dir, 'train')
    test_dir = os.path.join(base_dir, 'test')

    transform = transforms.Compose([
        transforms.Resize((192, 192)),  # if the images are too small, after several poolings a feature map can end up smaller than the kernel, which raises an error
        transforms.ToTensor(),
        transforms.Normalize(mean=[.5, .5, .5], std=[.5, .5, .5])
    ])
    train_ds = torchvision.datasets.ImageFolder(train_dir, transform=transform)
    test_ds = torchvision.datasets.ImageFolder(test_dir, transform=transform)

    BATCH_SIZE = 16
    train_dl = torch.utils.data.DataLoader(
        train_ds,
        batch_size=BATCH_SIZE,
        shuffle=True
    )
    test_dl = torch.utils.data.DataLoader(
        test_ds,
        batch_size=BATCH_SIZE,
    )

    # torchvision.datasets provides ready-made datasets and torchvision.models
    # provides pretrained models. The `pretrained` flag controls whether trained
    # weights are loaded: by default (pretrained=False) only the architecture is
    # created, with weights that have not been trained on ImageNet; with
    # pretrained=True the ImageNet-trained weights are loaded along with the
    # architecture. .pth is the file extension PyTorch conventionally uses for
    # saved models.
    model = torchvision.models.vgg16(pretrained=True)  # cached under e.g. C:\Users\admin/.torch\models
    print(model)

Printing the model shows its two main parts: `features` (the convolutional base) and `classifier` (the fully connected head). (Full printout omitted here.)
Using the Pretrained Model

In practice we use the convolutional base of the pretrained model to extract features: because the model has already been pretrained on ImageNet, its convolutional part extracts image features effectively. We then retrain only its classifier (the fully connected layers).
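For orientation, the classifier part of torchvision's stock VGG16 prints as follows; `model.classifier[-1]` in the code below refers to layer (6), the final 1000-way Linear layer:

    (classifier): Sequential(
      (0): Linear(in_features=25088, out_features=4096, bias=True)
      (1): ReLU(inplace=True)
      (2): Dropout(p=0.5, inplace=False)
      (3): Linear(in_features=4096, out_features=4096, bias=True)
      (4): ReLU(inplace=True)
      (5): Dropout(p=0.5, inplace=False)
      (6): Linear(in_features=4096, out_features=1000, bias=True)
    )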

    """(2) Modify the model"""
    print('\n(2) Modify the model')
    # 2-1. Freeze the convolutional base (the `features` part of the model):
    #      its parameters no longer receive gradient updates.
    for p in model.features.parameters():
        p.requires_grad = False
    # 2-2. Change the classifier: the original head outputs 1000 classes, we need 4.
    #      Merely assigning `out_features = 4` would not resize the weight matrix,
    #      so we replace the last Linear layer outright.
    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 4)
    # The model can now be trained
    if torch.cuda.is_available():
        model.to('cuda')
    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=0.001)  # optimizer
    loss_fn = nn.CrossEntropyLoss()  # softmax cross-entropy loss

    def fit(epoch, model, trainloader, testloader):  # run one epoch of training and evaluation
        correct = 0
        total = 0
        running_loss = 0
        model.train()
        for x, y in trainloader:
            if torch.cuda.is_available():
                x, y = x.to('cuda'), y.to('cuda')
            y_pred = model(x)
            loss = loss_fn(y_pred, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                y_pred = torch.argmax(y_pred, dim=1)
                correct += (y_pred == y).sum().item()
                total += y.size(0)
                running_loss += loss.item()
        # exp_lr_scheduler.step()
        epoch_loss = running_loss / len(trainloader.dataset)
        epoch_acc = correct / total

        test_correct = 0
        test_total = 0
        test_running_loss = 0
        model.eval()
        with torch.no_grad():
            for x, y in testloader:
                if torch.cuda.is_available():
                    x, y = x.to('cuda'), y.to('cuda')
                y_pred = model(x)
                loss = loss_fn(y_pred, y)
                y_pred = torch.argmax(y_pred, dim=1)
                test_correct += (y_pred == y).sum().item()
                test_total += y.size(0)
                test_running_loss += loss.item()
        epoch_test_loss = test_running_loss / len(testloader.dataset)
        epoch_test_acc = test_correct / test_total

        print('epoch: ', epoch,
              'loss: ', round(epoch_loss, 3),
              'accuracy:', round(epoch_acc, 3),
              'test_loss: ', round(epoch_test_loss, 3),
              'test_accuracy:', round(epoch_test_acc, 3))
        return epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc

    epochs = 10
    train_loss = []
    train_acc = []
    test_loss = []
    test_acc = []
    for epoch in range(epochs):
        epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc = fit(epoch, model, train_dl, test_dl)
        train_loss.append(epoch_loss)
        train_acc.append(epoch_acc)
        test_loss.append(epoch_test_loss)
        test_acc.append(epoch_test_acc)

    plt.figure(0)
    plt.plot(range(1, epochs+1), train_loss, label='train_loss')
    plt.plot(range(1, epochs+1), test_loss, label='test_loss')
    plt.legend()
    plt.show()
    plt.figure(1)
    plt.plot(range(1, epochs+1), train_acc, label='train_acc')
    plt.plot(range(1, epochs+1), test_acc, label='test_acc')
    plt.legend()
    plt.show()

3. Data Augmentation and Learning Rate Decay

Data Augmentation

To combat overfitting we can also use data augmentation. What is data augmentation? It means artificially expanding the data: if the training images were all taken in daylight, for instance, we can lower their brightness to simulate night; or if the training set contains a face photo showing only the left ear, we can flip it horizontally to obtain a right-ear version.

    transforms.RandomCrop                   # crop at a random position (transforms.CenterCrop crops from the center)
    transforms.RandomHorizontalFlip(p=1)    # random horizontal flip; p is the probability of flipping
    transforms.RandomVerticalFlip(p=1)      # random vertical flip
    transforms.RandomRotation               # random rotation
    transforms.ColorJitter(brightness=1)    # random color changes: brightness, contrast, saturation
    transforms.ColorJitter(contrast=1)      # random contrast change
    transforms.ColorJitter(saturation=0.5)  # random saturation change
    transforms.ColorJitter(hue=0.5)         # random hue shift
    transforms.RandomGrayscale(p=0.5)       # randomly convert to grayscale
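To preview what these transforms do, here is a small sketch (the image path is hypothetical; substitute any training image) that applies an augmentation pipeline to the same image several times:

    from PIL import Image
    import matplotlib.pyplot as plt
    from torchvision import transforms

    aug = transforms.Compose([
        transforms.Resize(224),
        transforms.RandomCrop(192),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.5, contrast=0.5),
    ])

    img = Image.open('./datasets/4weather/train/cloudy/cloudy1.jpg')  # hypothetical path
    plt.figure(figsize=(10, 3))
    for i in range(4):
        plt.subplot(1, 4, i + 1)
        plt.imshow(aug(img))  # each call draws new random crop/flip/jitter parameters
        plt.axis('off')
    plt.show()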

 

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    import numpy as np
    import matplotlib.pyplot as plt
    import torchvision
    from torchvision import transforms
    import os

    base_dir = r'./datasets/4weather'
    train_dir = os.path.join(base_dir, 'train')
    test_dir = os.path.join(base_dir, 'test')

    """(1) Data augmentation methods"""
    # transforms.RandomCrop              -- crop at a random position (CenterCrop crops from the center)
    # transforms.RandomHorizontalFlip(p) -- random horizontal flip with probability p
    # transforms.RandomVerticalFlip(p)   -- random vertical flip
    # transforms.RandomRotation          -- random rotation
    # transforms.ColorJitter             -- random brightness / contrast / saturation / hue changes
    # transforms.RandomGrayscale(p)      -- random conversion to grayscale

    # Augment only the training data; the test data needs no augmentation.
    train_transform = transforms.Compose([
        transforms.Resize(224),
        transforms.RandomCrop(192),              # randomly crop a 192x192 patch from the 224 image, so the model sees a different region each time
        transforms.RandomHorizontalFlip(),       # random horizontal (left-right) flip
        transforms.RandomRotation(0.2),          # random rotation; the argument is in degrees, so this rotates within ±0.2° (pass 20 for ±20°)
        transforms.ColorJitter(brightness=0.5),  # brightness varies between 50% (1-0.5) and 150% (1+0.5) of the original
        transforms.ColorJitter(contrast=0.5),    # random contrast change
        transforms.ToTensor(),
        transforms.Normalize(mean=[.5, .5, .5], std=[.5, .5, .5])
    ])
    test_transform = transforms.Compose([
        transforms.Resize((192, 192)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[.5, .5, .5], std=[.5, .5, .5])
    ])
    train_ds = torchvision.datasets.ImageFolder(train_dir, transform=train_transform)
    test_ds = torchvision.datasets.ImageFolder(test_dir, transform=test_transform)
    BATCH_SIZE = 16
    train_dl = torch.utils.data.DataLoader(train_ds, batch_size=BATCH_SIZE, shuffle=True)
    test_dl = torch.utils.data.DataLoader(test_ds, batch_size=BATCH_SIZE)

    model = torchvision.models.vgg16(pretrained=True)
    for param in model.features.parameters():
        param.requires_grad = False
    # Replace the 1000-class output layer with a 4-class one (assigning
    # out_features alone would not resize the weight matrix):
    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 4)
    if torch.cuda.is_available():
        model.to('cuda')
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=0.0001)

Learning Rate Decay

    # Learning-rate decay. The manual way is to reach into the optimizer's param
    # groups and scale the lr by hand:
    # for p in optimizer.param_groups:
    #     p['lr'] *= 0.9    # decay by a factor of 0.9
    # PyTorch also provides built-in learning-rate schedulers:
    from torch.optim import lr_scheduler
    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=0.001)
    # StepLR decays the lr by `gamma` every `step_size` steps (here: every 7 epochs).
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.9)
    # MultiStepLR decays at the epochs listed in `milestones`, e.g. [10, 20] decays
    # at epochs 10 and 20:
    # lr_scheduler.MultiStepLR(optimizer, milestones=[10, 15, 18, 25], gamma=0.9)
    # ExponentialLR(optimizer, gamma=0.9) decays at every epoch.

    def fit(epoch, model, trainloader, testloader):
        correct = 0
        total = 0
        running_loss = 0
        model.train()
        for x, y in trainloader:
            if torch.cuda.is_available():
                x, y = x.to('cuda'), y.to('cuda')
            y_pred = model(x)
            loss = loss_fn(y_pred, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                y_pred = torch.argmax(y_pred, dim=1)
                correct += (y_pred == y).sum().item()
                total += y.size(0)
                running_loss += loss.item()
        exp_lr_scheduler.step()  # one scheduler step per epoch (per fit call); after every 7 steps the lr decays
        epoch_loss = running_loss / len(trainloader.dataset)
        epoch_acc = correct / total

        test_correct = 0
        test_total = 0
        test_running_loss = 0
        model.eval()
        with torch.no_grad():
            for x, y in testloader:
                if torch.cuda.is_available():
                    x, y = x.to('cuda'), y.to('cuda')
                y_pred = model(x)
                loss = loss_fn(y_pred, y)
                y_pred = torch.argmax(y_pred, dim=1)
                test_correct += (y_pred == y).sum().item()
                test_total += y.size(0)
                test_running_loss += loss.item()
        epoch_test_loss = test_running_loss / len(testloader.dataset)
        epoch_test_acc = test_correct / test_total

        print('epoch: ', epoch,
              'loss: ', round(epoch_loss, 3),
              'accuracy:', round(epoch_acc, 3),
              'test_loss: ', round(epoch_test_loss, 3),
              'test_accuracy:', round(epoch_test_acc, 3))
        return epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc

    epochs = 10
    train_loss = []
    train_acc = []
    test_loss = []
    test_acc = []
    for epoch in range(epochs):
        epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc = fit(epoch, model, train_dl, test_dl)
        # Manual learning-rate decay, as an alternative to the scheduler:
        # if epoch % 5 == 0:
        #     for p in optimizer.param_groups:
        #         p['lr'] *= 0.9
        train_loss.append(epoch_loss)
        train_acc.append(epoch_acc)
        test_loss.append(epoch_test_loss)
        test_acc.append(epoch_test_acc)
    plt.plot(range(1, epochs+1), train_loss, label='train_loss')
    plt.plot(range(1, epochs+1), test_loss, label='test_loss')
    plt.legend()

    plt.plot(range(1, epochs+1), train_acc, label='train_acc')
    plt.plot(range(1, epochs+1), test_acc, label='test_acc')
    plt.legend()
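To check how StepLR behaves, here is a standalone sketch with a toy optimizer (independent of the model above; `get_last_lr` is available in recent PyTorch versions) that prints the learning rate per epoch:

    import torch
    from torch.optim import lr_scheduler

    params = [torch.nn.Parameter(torch.zeros(1))]  # a toy parameter, just to build an optimizer
    opt = torch.optim.Adam(params, lr=0.001)
    sched = lr_scheduler.StepLR(opt, step_size=7, gamma=0.9)

    for epoch in range(21):
        opt.step()                         # training would happen here
        sched.step()                       # one scheduler step per epoch
        print(epoch, sched.get_last_lr())  # 0.001 for epochs 0-5, 0.0009 from epoch 6, 0.00081 from epoch 13, ...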

4. The Pretrained ResNet Network and Model Fine-Tuning

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    import numpy as np
    import matplotlib.pyplot as plt
    import torchvision
    from torchvision import transforms
    import os

    print("1. The ResNet pretrained model:")
    base_dir = r'./datasets/4weather'
    train_dir = os.path.join(base_dir, 'train')
    test_dir = os.path.join(base_dir, 'test')
    transform = transforms.Compose([
        transforms.Resize((192, 192)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
    ])
    train_ds = torchvision.datasets.ImageFolder(train_dir, transform=transform)
    test_ds = torchvision.datasets.ImageFolder(test_dir, transform=transform)
    BATCH_SIZE = 32
    train_dl = torch.utils.data.DataLoader(train_ds, batch_size=BATCH_SIZE, shuffle=True)
    test_dl = torch.utils.data.DataLoader(test_ds, batch_size=BATCH_SIZE)

    model = torchvision.models.resnet18(pretrained=True)
    print(model)
    for p in model.parameters():
        p.requires_grad = False  # freeze every parameter in the network

    # We only need to replace the final linear layer with one that outputs our
    # number of classes; the new layer's parameters have requires_grad=True by default.
    in_f = model.fc.in_features
    model.fc = nn.Linear(in_f, 4)
    # print("\nafter changed model:\n", model)

    # Move the model to the GPU
    if torch.cuda.is_available():
        model.to('cuda')
    # Create the optimizer (over the new fc layer only) and define the loss function
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=0.001)
    loss_fn = nn.CrossEntropyLoss()

    def fit(epoch, model, trainloader, testloader):
        correct = 0
        total = 0
        running_loss = 0
        model.train()  # the model contains BN layers, which behave differently in training and evaluation
        for x, y in trainloader:
            if torch.cuda.is_available():
                x, y = x.to('cuda'), y.to('cuda')
            y_pred = model(x)
            loss = loss_fn(y_pred, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                y_pred = torch.argmax(y_pred, dim=1)
                correct += (y_pred == y).sum().item()
                total += y.size(0)
                running_loss += loss.item()
        # exp_lr_scheduler.step()
        epoch_loss = running_loss / len(trainloader.dataset)
        epoch_acc = correct / total

        test_correct = 0
        test_total = 0
        test_running_loss = 0
        model.eval()
        with torch.no_grad():
            for x, y in testloader:
                if torch.cuda.is_available():
                    x, y = x.to('cuda'), y.to('cuda')
                y_pred = model(x)
                loss = loss_fn(y_pred, y)
                y_pred = torch.argmax(y_pred, dim=1)
                test_correct += (y_pred == y).sum().item()
                test_total += y.size(0)
                test_running_loss += loss.item()
        epoch_test_loss = test_running_loss / len(testloader.dataset)
        epoch_test_acc = test_correct / test_total

        print('epoch: ', epoch,
              'loss: ', round(epoch_loss, 3),
              'accuracy:', round(epoch_acc, 3),
              'test_loss: ', round(epoch_test_loss, 3),
              'test_accuracy:', round(epoch_test_acc, 3))
        return epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc

    epochs = 50
    train_loss = []
    train_acc = []
    test_loss = []
    test_acc = []
    for epoch in range(epochs):
        epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc = fit(epoch, model, train_dl, test_dl)
        train_loss.append(epoch_loss)
        train_acc.append(epoch_acc)
        test_loss.append(epoch_test_loss)
        test_acc.append(epoch_test_acc)

Model Fine-Tuning

Fine-tuning means jointly training the newly added classifier layers together with some or all of the convolutional layers. This lets us "fine-tune" the higher-order feature representations in the base model so that they become more relevant to the task at hand.

Only after the classifier has been trained should the convolutional layers of the base be fine-tuned. Otherwise the training error is very large at the start, and the representations those convolutional layers learned during pretraining would be destroyed before the new head settles.

Fine-tuning steps:

  • 1. Add custom layers on top of the pretrained convolutional base
  • 2. Freeze all layers of the convolutional base
  • 3. Train the newly added classifier layers
  • 4. Unfreeze part of the convolutional base
  • 5. Jointly train the unfrozen convolutional layers and the added custom layers
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    import numpy as np
    import matplotlib.pyplot as plt
    import torchvision
    from torchvision import transforms
    import os

    base_dir = r'./datasets/4weather'
    train_dir = os.path.join(base_dir, 'train')
    test_dir = os.path.join(base_dir, 'test')
    train_transform = transforms.Compose([
        transforms.Resize(224),
        transforms.RandomResizedCrop(192, scale=(0.6, 1.0), ratio=(0.8, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(0.2),  # the argument is in degrees, so this rotates within ±0.2°
        transforms.ColorJitter(brightness=0.5, contrast=0, saturation=0, hue=0),
        transforms.ColorJitter(brightness=0, contrast=0.5, saturation=0, hue=0),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
    ])
    test_transform = transforms.Compose([
        transforms.Resize((192, 192)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
    ])
    train_ds = torchvision.datasets.ImageFolder(train_dir, transform=train_transform)
    test_ds = torchvision.datasets.ImageFolder(test_dir, transform=test_transform)
    BATCH_SIZE = 32
    train_dl = torch.utils.data.DataLoader(train_ds, batch_size=BATCH_SIZE, shuffle=True)
    test_dl = torch.utils.data.DataLoader(test_ds, batch_size=BATCH_SIZE)

    model = torchvision.models.resnet101(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False
    in_f = model.fc.in_features
    model.fc = nn.Linear(in_f, 4)
    if torch.cuda.is_available():
        model.to('cuda')
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=0.001)
    from torch.optim import lr_scheduler
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.5)

    def fit(epoch, model, trainloader, testloader):
        correct = 0
        total = 0
        running_loss = 0
        model.train()
        for x, y in trainloader:
            if torch.cuda.is_available():
                x, y = x.to('cuda'), y.to('cuda')
            y_pred = model(x)
            loss = loss_fn(y_pred, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                y_pred = torch.argmax(y_pred, dim=1)
                correct += (y_pred == y).sum().item()
                total += y.size(0)
                running_loss += loss.item()
        exp_lr_scheduler.step()
        epoch_loss = running_loss / len(trainloader.dataset)
        epoch_acc = correct / total

        test_correct = 0
        test_total = 0
        test_running_loss = 0
        model.eval()
        with torch.no_grad():
            for x, y in testloader:
                if torch.cuda.is_available():
                    x, y = x.to('cuda'), y.to('cuda')
                y_pred = model(x)
                loss = loss_fn(y_pred, y)
                y_pred = torch.argmax(y_pred, dim=1)
                test_correct += (y_pred == y).sum().item()
                test_total += y.size(0)
                test_running_loss += loss.item()
        epoch_test_loss = test_running_loss / len(testloader.dataset)
        epoch_test_acc = test_correct / test_total

        print('epoch:', epoch,
              ',loss:', round(epoch_loss, 3),
              ',accuracy:', round(epoch_acc, 3),
              ',test_loss:', round(epoch_test_loss, 3),
              ',test_accuracy:', round(epoch_test_acc, 3))
        return epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc

    epochs = 30
    train_loss = []
    train_acc = []
    test_loss = []
    test_acc = []
    for epoch in range(epochs):
        epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc = fit(epoch, model, train_dl, test_dl)
        train_loss.append(epoch_loss)
        train_acc.append(epoch_acc)
        test_loss.append(epoch_test_loss)
        test_acc.append(epoch_test_acc)

    """Fine-tuning: only after the Linear head has been trained do we train the conv layers."""
    for param in model.parameters():  # unfreeze the convolutional base
        param.requires_grad = True
    extend_epochs = 30
    from torch.optim import lr_scheduler
    optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)  # now optimize all parameters; use a smaller lr when fine-tuning
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.5)
    for epoch in range(extend_epochs):
        epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc = fit(epoch, model, train_dl, test_dl)
        train_loss.append(epoch_loss)
        train_acc.append(epoch_acc)
        test_loss.append(epoch_test_loss)
        test_acc.append(epoch_test_acc)

    plt.figure(0)
    plt.plot(range(1, len(train_loss)+1), train_loss, label='train_loss')
    plt.plot(range(1, len(train_loss)+1), test_loss, label='test_loss')
    plt.legend()
    plt.show()
    plt.figure(1)
    plt.plot(range(1, len(train_loss)+1), train_acc, label='train_acc')
    plt.plot(range(1, len(train_loss)+1), test_acc, label='test_acc')
    plt.legend()
    plt.show()

5. Saving Model Weights

  • Saving a model means saving its weights (the trainable parameters): training changes them, so the model's state before and after training differs. state_dict is a simple Python dictionary that maps each layer to its parameter tensors (weights and biases, but also buffers such as BatchNorm's running_mean; torch.optim objects have a state_dict of their own), which makes the model state easy to save, update, modify, and restore.
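Because optimizers carry their own state_dict as well, checkpoints often bundle both. A minimal sketch, assuming a model and optimizer like the ones defined in the script below (the file name is arbitrary):

    import torch

    # Assumes `model`, `optimizer` and the current `epoch` already exist.
    checkpoint = {
        'model_state': model.state_dict(),      # the model's parameters and buffers
        'optim_state': optimizer.state_dict(),  # the optimizer's internal state (e.g. Adam moments)
        'epoch': epoch,
    }
    torch.save(checkpoint, './checkpoint.pth')

    # Resuming training later:
    ckpt = torch.load('./checkpoint.pth')
    model.load_state_dict(ckpt['model_state'])
    optimizer.load_state_dict(ckpt['optim_state'])
    start_epoch = ckpt['epoch'] + 1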

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    import numpy as np
    import matplotlib.pyplot as plt
    import torchvision
    import os
    import copy
    from torchvision import transforms

    base_dir = r'./datasets/4weather'
    train_dir = os.path.join(base_dir, 'train')
    test_dir = os.path.join(base_dir, 'test')
    transform = transforms.Compose([
        transforms.Resize((96, 96)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5],
                             std=[0.5, 0.5, 0.5])
    ])
    train_ds = torchvision.datasets.ImageFolder(train_dir, transform=transform)
    test_ds = torchvision.datasets.ImageFolder(test_dir, transform=transform)
    BATCHSIZE = 16
    train_dl = torch.utils.data.DataLoader(train_ds, batch_size=BATCHSIZE, shuffle=True)
    test_dl = torch.utils.data.DataLoader(test_ds, batch_size=BATCHSIZE)

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(3, 16, 3)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(16, 32, 3)
            self.conv3 = nn.Conv2d(32, 64, 3)
            self.fc1 = nn.Linear(64*10*10, 1024)
            self.fc2 = nn.Linear(1024, 256)
            self.fc3 = nn.Linear(256, 4)

        def forward(self, x):
            x = self.pool(F.relu(self.conv1(x)))
            x = self.pool(F.relu(self.conv2(x)))
            x = self.pool(F.relu(self.conv3(x)))
            x = x.view(-1, 64 * 10 * 10)
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = self.fc3(x)
            return x

    model = Net()
    if torch.cuda.is_available():
        model.to('cuda')
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    loss_fn = nn.CrossEntropyLoss()

    def fit(epoch, model, trainloader, testloader):
        correct = 0
        total = 0
        running_loss = 0
        model.train()
        for x, y in trainloader:
            if torch.cuda.is_available():
                x, y = x.to('cuda'), y.to('cuda')
            y_pred = model(x)
            loss = loss_fn(y_pred, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                y_pred = torch.argmax(y_pred, dim=1)
                correct += (y_pred == y).sum().item()
                total += y.size(0)
                running_loss += loss.item()
        epoch_loss = running_loss / len(trainloader.dataset)
        epoch_acc = correct / total

        test_correct = 0
        test_total = 0
        test_running_loss = 0
        model.eval()
        with torch.no_grad():
            for x, y in testloader:
                if torch.cuda.is_available():
                    x, y = x.to('cuda'), y.to('cuda')
                y_pred = model(x)
                loss = loss_fn(y_pred, y)
                y_pred = torch.argmax(y_pred, dim=1)
                test_correct += (y_pred == y).sum().item()
                test_total += y.size(0)
                test_running_loss += loss.item()
        epoch_test_loss = test_running_loss / len(testloader.dataset)
        epoch_test_acc = test_correct / test_total

        print('epoch:', epoch,
              ',loss:', round(epoch_loss, 3),
              ',accuracy:', round(epoch_acc, 3),
              ',test_loss:', round(epoch_test_loss, 3),
              ',test_accuracy:', round(epoch_test_acc, 3))
        return epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc

    epochs = 10
    train_loss = []
    train_acc = []
    test_loss = []
    test_acc = []
    for epoch in range(epochs):
        epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc = fit(epoch, model, train_dl, test_dl)
        train_loss.append(epoch_loss)
        train_acc.append(epoch_acc)
        test_loss.append(epoch_test_loss)
        test_acc.append(epoch_test_acc)

    """Save the model"""
    # Training changes the model's trainable parameters; saving a model means saving
    # those weights. state_dict is a simple Python dictionary that maps each layer
    # to its parameter tensors (weights and biases, plus buffers such as BatchNorm's
    # running_mean; torch.optim objects have their own state_dict), which makes the
    # state easy to save, update, and restore.
    model.state_dict()  # returns the current trainable parameters as a Python dict
    PATH = './my_net.pth'  # where to save the model's parameters
    torch.save(model.state_dict(), PATH)  # save the current weights

    # How do we load what was just saved? Only the parameters were stored, so the
    # model itself must be re-created (re-initialized) first; this style of saving
    # requires keeping the code that defines the model class.
    new_model = Net()  # a fresh model whose parameters are randomly initialized
    new_model.load_state_dict(torch.load(PATH))  # load the saved parameters into it
    if torch.cuda.is_available():
        new_model.to('cuda')

    test_correct = 0
    test_total = 0
    new_model.eval()
    with torch.no_grad():
        for x, y in test_dl:
            if torch.cuda.is_available():
                x, y = x.to('cuda'), y.to('cuda')
            y_pred = new_model(x)
            y_pred = torch.argmax(y_pred, dim=1)
            test_correct += (y_pred == y).sum().item()
            test_total += y.size(0)
    epoch_test_acc = test_correct / test_total
    print(epoch_test_acc)

    """Keep the best weights seen while training: remember the epoch with the best test accuracy."""
    best_model_wts = copy.deepcopy(model.state_dict())  # deep copy
    best_acc = 0.0
    train_loss = []
    train_acc = []
    test_loss = []
    test_acc = []
    for epoch in range(epochs):
        epoch_loss, epoch_acc, epoch_test_loss, epoch_test_acc = fit(epoch, model, train_dl, test_dl)
        train_loss.append(epoch_loss)
        train_acc.append(epoch_acc)
        test_loss.append(epoch_test_loss)
        test_acc.append(epoch_test_acc)
        if epoch_test_acc > best_acc:
            best_acc = epoch_test_acc
            best_model_wts = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_model_wts)
    model.eval()

Saving and Loading the Complete Model

  • torch.save(model.state_dict(), PATH) saves only the model's parameters. Alternatively, the entire model (architecture plus weights) can be saved in one file; PyTorch pickles the whole object in that case, so loading still requires the original class definition to be importable:
    PATH = './my_whole_model.pth'
    torch.save(model, PATH)        # save the whole model, not just its state_dict
    new_model2 = torch.load(PATH)  # no need to instantiate the class first
    new_model2.eval()
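One practical detail: torch.load accepts a map_location argument, so a model saved on a GPU machine can be loaded on a CPU-only machine:

    import torch

    # Load the whole model onto the CPU even if it was saved from CUDA tensors:
    model_cpu = torch.load('./my_whole_model.pth', map_location='cpu')

    # The same argument works when loading a bare state_dict:
    state = torch.load('./my_net.pth', map_location='cpu')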

 
