
A Simple Attempt at Image Classification with Transfer Learning (VGG, ResNet)


Preface

Transfer learning means saving a model that already solves one problem and reusing it on a different but related problem. For example, a model trained to recognize cars can also be used to improve the recognition of trucks. In many cases transfer learning simplifies model building, lowers its cost, and still achieves good accuracy.
This article demonstrates transfer learning with PyTorch on a small image dataset: how to use pretrained models, and how their results compare with a convolutional neural network built from scratch.

This article is based on an original post; the data comes from that post, while the code here has been modified.

Dataset

Given that VGG16 expects inputs of shape (3, 224, 224), i.e. 224x224 color images, I decided to run the experiment on this emergency-vehicle dataset. Emergency vehicles here include police cars, fire trucks, and ambulances. The dataset ships with an emergency_train.csv file that stores the labels of the training samples.

Dataset download: Baidu Cloud link, extraction code: quia

Environment

OS: Ubuntu 16.04

PyTorch

GPU: 1660s, 6 GB

Choosing a Pretrained Model

A pretrained model is one that someone else has already designed and trained to solve a particular problem. Pretrained models are very useful in deep learning projects because not everyone has abundant compute; when working on a local machine, starting from a pretrained model saves a lot of time. A pretrained model shares what it has learned by passing its weight and bias matrices to the new model, so before doing transfer learning I first need to pick a suitable pretrained model and transfer its weights and biases to my own model.

Many pretrained models may be available for a given task, so the question is which one fits this task best. Based on the dataset described above, I choose VGG16 pretrained on ImageNet rather than on MNIST: our dataset consists of vehicle images, and ImageNet contains plenty of vehicle images, so the former is clearly the more reasonable choice. In short, the key criteria when picking a pretrained model are not parameter count or raw benchmark performance, but how related the tasks are and how similar the datasets are.
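Before committing to a backbone, it can also help to load the candidate model once and inspect its classifier head, so we know exactly which layer to replace later. A minimal sketch (it only prints the structure of the torchvision VGG16-BN model that is used later in this article):

from torchvision import models

# Load VGG16 (with batch norm), pretrained on ImageNet
vgg = models.vgg16_bn(pretrained=True)

# The classifier head ends in Linear(4096, 1000), i.e. the 1000 ImageNet
# classes; this is the layer we will replace for our 2-class problem.
print(vgg.classifier)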

Importing Dependencies

# data-analysis tools
import pandas as pd
import numpy as np
from tqdm import tqdm
from sklearn.model_selection import train_test_split

# tools for reading and displaying images
from skimage import io
from skimage.transform import resize
import matplotlib.pyplot as plt
from torchvision import transforms

# tools for building models
import torch
from torch.utils.data import DataLoader
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam
from sklearn import metrics

# tools for transfer learning
from torchvision import models
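The code in this article moves tensors to 'cuda' directly, which is fine on the machine described above. If you want the same scripts to also run on a CPU-only machine, a small device helper is a common pattern (a sketch, not used in the code below):

# Choose GPU if available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)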

Data Processing

Data and label file

Read the emergency_train.csv file containing the image names and their labels, and take a look at its contents:

def read_info():
    df = pd.read_csv("data/emergency/emergency_train.csv")
    return df

df = read_info()

The csv file contains two columns:

  • image_names: the names of all images in the dataset
  • emergency_or_not: whether the image belongs to the emergency class; 0 means a non-emergency vehicle and 1 means an emergency vehicle (the class balance is checked right below)
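A quick sanity check on the dataframe (a short sketch; emergency_or_not is the column name used by the splitting code later):

# Peek at the first rows and check the class balance
print(df.head())
print(df.emergency_or_not.value_counts())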

Loading the Images

# load the training images
def read_images(df):
    images = []
    for img_name in tqdm(df.image_names.values):
        # defining the image path
        img_path = 'data/emergency/images/' + img_name
        # reading the image
        img = io.imread(img_path)
        # appending the image into the list
        images.append(img)
    return images

images = read_images(df)
print("Total images number: ", len(images))

As the output shows, the dataset contains 1,646 images in total.

Total images number: 1646
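To confirm the images were read correctly, a couple of samples can be displayed with their labels (a sketch using the matplotlib import above):

# Display the first two images together with their labels
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, img, lbl in zip(axes, images[:2], df.emergency_or_not.values[:2]):
    ax.imshow(img)
    ax.set_title("emergency" if lbl == 1 else "non-emergency")
    ax.axis("off")
plt.show()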

Splitting the Dataset

We split the data into a training set and a validation set, holding out 10% for validation.

def data_split(images, df):
    x = images
    y = df.emergency_or_not.values
    train_x, val_x, train_y, val_y = train_test_split(x, 
                                                      y, 
                                                      test_size=0.1, 
                                                      random_state= 13,
                                                      stratify=y)
    return train_x, val_x, train_y, val_y

train_x, val_x, train_y, val_y = data_split(images, df)
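Because stratify=y is passed, the class ratio should be preserved in both subsets; this can be checked in one line (not part of the original code):

# The stratified split should keep the emergency/non-emergency ratio in both subsets
print("train:", np.bincount(train_y), "val:", np.bincount(val_y))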

Data Transforms and Format Conversion

We apply resizing, random cropping, and other transforms to the images. Since the VGG and ResNet pretrained models used later expect inputs of size 3x224x224, the data is converted to that size.

# image transforms
data_transforms = {'train': transforms.Compose([
                                transforms.ToTensor(),  # images were read with skimage as np.array, so convert to tensor first
                                transforms.RandomResizedCrop(224), # random crop to 224 x 224
                                transforms.RandomHorizontalFlip(),
                                transforms.Normalize([0.485, 0.456, 0.406],
                                                     [0.229, 0.224, 0.225]) # per-channel normalization, same statistics as the VGG pretrained model
                            ]),
                   'val': transforms.Compose([
                                transforms.ToTensor(),
                                transforms.Resize(226),
                                transforms.CenterCrop(224),  # crop the central 224x224 region
                                transforms.Normalize([0.485, 0.456, 0.406],
                                                     [0.229, 0.224, 0.225])
                                             ])
                  }

def data_convert(train_x, train_y, val_x, val_y, data_transforms):
    # defining train dataset, test dataset
    train_dataset, val_dataset = [], []
    # obtain train dataset
    for idx, data in enumerate(train_x):
        data = data_transforms['train'](data)
#         data = data.permute(2,0,1)
        label = torch.tensor(train_y[idx], dtype=torch.long)
        train_dataset.append((data, label))
    # obtain val dataset
    for idx, data in enumerate(val_x):
        data = data_transforms['val'](data)
#         data = data.permute(2,0,1)
        label = torch.tensor(val_y[idx], dtype=torch.long)
        val_dataset.append((data, label))
    # return the datasets
    return train_dataset, val_dataset

train_dataset, val_dataset = data_convert(train_x, train_y, val_x, val_y, data_transforms)
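Pre-building lists of transformed tensors is fine for roughly 1,600 images, but it fixes the random augmentation once and keeps every tensor in memory. An alternative is a small torch.utils.data.Dataset that applies the transform on the fly, so every epoch sees a fresh random crop and flip. A sketch (the class name is illustrative, not from the original code):

from torch.utils.data import Dataset

class EmergencyDataset(Dataset):
    """Apply the torchvision transform lazily, at access time."""
    def __init__(self, images, labels, transform):
        self.images = images
        self.labels = labels
        self.transform = transform

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        x = self.transform(self.images[idx])
        y = torch.tensor(self.labels[idx], dtype=torch.long)
        return x, y

# Drop-in replacement for the lists built above:
# train_dataset = EmergencyDataset(train_x, train_y, data_transforms['train'])
# val_dataset   = EmergencyDataset(val_x, val_y, data_transforms['val'])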

Pitfall

skimage reads images in (H, W, C) format. transforms.ToTensor() automatically converts them to (C, H, W). In addition, when the array read by skimage has dtype np.uint8, ToTensor also scales the values into [0, 1] (see the torchvision documentation).
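This behaviour is easy to confirm on a dummy uint8 array (a tiny check, not part of the original code):

# (H, W, C) uint8 in [0, 255]  ->  (C, H, W) float32 in [0.0, 1.0]
dummy = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
t = transforms.ToTensor()(dummy)
print(t.shape, t.dtype, float(t.min()), float(t.max()))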

A Custom CNN Baseline

We define a simple convolutional neural network and train it on this dataset. Its result serves as a baseline for comparison with the transfer-learning models.

The model is defined as follows:

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        
        self.cnn_layers = nn.Sequential(
            # defining a 2D convolution layer
            nn.Conv2d(in_channels=3, out_channels=4, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(4),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            # define another 2D convolution layer
            nn.Conv2d(in_channels=4, out_channels=8, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(8),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            )
        
        self.linear_layer = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8*56*56, 2)
            )
        
    def forward(self, x):
        x = self.cnn_layers(x)
        x = self.linear_layer(x)
        return x
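The 8*56*56 input size of the final linear layer assumes 224x224 inputs: each MaxPool2d halves the spatial size, 224 -> 112 -> 56. A quick shape check with a dummy batch confirms it (a sketch):

# Sanity-check the flattened feature size for 224x224 inputs
net = Net()
dummy = torch.randn(1, 3, 224, 224)
print(net.cnn_layers(dummy).shape)   # torch.Size([1, 8, 56, 56])
print(net(dummy).shape)              # torch.Size([1, 2])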

The training code is as follows:

def train(epochs=50, batch_size=32):
    # obtain dataloader
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
    
    # defining the model
    model = Net()
    
    # defining the optimizer
    optimizer = Adam(model.parameters(), lr=0.0001)
    
    # defining the loss function
    criterion = nn.CrossEntropyLoss()
    
    # checking if GPU is available
    if torch.cuda.is_available():
        model = model.to("cuda")
        criterion = criterion.to("cuda")
        
    # training
    for epoch in range(1, epochs + 1):
        model.train()
        training_loss = []
        real_labels, pred_labels = [], []
        for data, label in train_loader:
            real_labels += label.detach().tolist()
            data = data.to('cuda')
            label = label.to('cuda')
            pred = model(data)
            loss = criterion(pred, label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # record batch loss
            training_loss.append(loss.item())
            # for evaluation
            out = F.softmax(pred, dim=1).cpu().detach().numpy()
            pred_label = np.argmax(out, 1)
            pred_labels += list(pred_label)
            
        # evaluation for train    
        train_acc = metrics.accuracy_score(real_labels, pred_labels)
        train_loss = np.mean(training_loss)
        
        # evaluation for val
        val_loss = []
        real_labels, pred_labels = [], []
        model.eval()
        with torch.no_grad():
            for data, label in val_loader:
                real_labels += label.detach().tolist()
                data = data.to('cuda')
                label = label.to('cuda')
                pred = model(data)
                loss = criterion(pred, label)
                val_loss.append(loss.item())
                out = F.softmax(pred, dim=1).cpu().detach().numpy()
                pred_label = np.argmax(out, 1)
                pred_labels += list(pred_label)
        val_acc = metrics.accuracy_score(real_labels, pred_labels)
        val_loss = np.mean(val_loss)
        
        # print log
        print("Epoch {:3d} | training loss {:5f} | training acc {:5f} | val loss {:5f} | val acc {:5f}".format(
            epoch, train_loss, train_acc, val_loss, val_acc))
    
    return model

model = train()

The training results are as follows (last three epochs shown):

Epoch 48 | training loss 0.072396 | training acc 0.997299 | val loss 0.833294 | val acc 0.678788
Epoch 49 | training loss 0.069730 | training acc 0.998650 | val loss 0.634375 | val acc 0.666667
Epoch 50 | training loss 0.069672 | training acc 0.997974 | val loss 0.728107 | val acc 0.666667

Building a Model with Transfer Learning (VGG)

The model is constructed as follows:

# loading the pretrained model
model = models.vgg16_bn(pretrained=True)

# Freeze model weights
for param in model.parameters():
    param.requires_grad = False

# checking if GPU is available
if torch.cuda.is_available():
    model = model.cuda()

# replace the last layer with a new layer
model.classifier[6] = nn.Sequential(
                        nn.Linear(4096, 2)
                        )

# make the new last layer trainable
for param in model.classifier[6].parameters():
    param.requires_grad = True
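Before training, it is worth confirming that only the new two-class head is trainable. Optionally, the optimizer can be handed only the trainable parameters; the train() function below passes model.parameters(), which also works because frozen parameters never receive gradients. A sketch:

# Confirm that only the new 2-class head is trainable
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} / {total:,}")

# Optional: hand the optimizer only the trainable parameters
# optimizer = Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.0001)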

The training code is shown below:

def train(model, epochs=50, batch_size=32):
    # obtain dataloader
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
        
    # defining the optimizer
    optimizer = Adam(model.parameters(), lr=0.0001)
    
    # defining the loss function
    criterion = nn.CrossEntropyLoss()
    
    # checking if GPU is available
    if torch.cuda.is_available():
        model = model.to("cuda")
        criterion = criterion.to("cuda")
        
    # training
    for epoch in range(1, epochs + 1):
        model.train()  # switch back to training mode (model.eval() is set during validation)
        training_loss = []
        real_labels, pred_labels = [], []
        for data, label in train_loader:
            real_labels += label.detach().tolist()
            data = data.to('cuda')
            label = label.to('cuda')
            pred = model(data)
            loss = criterion(pred, label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # record batch loss
            training_loss.append(loss.item())
            # for evaluation
            out = F.softmax(pred, dim=1).cpu().detach().numpy()
            pred_label = np.argmax(out, 1)
            pred_labels += list(pred_label)
            
        # evaluation for train    
        train_acc = metrics.accuracy_score(real_labels, pred_labels)
        train_loss = np.mean(training_loss)
        
        # evaluation for val
        val_loss = []
        real_labels, pred_labels = [], []
        model.eval()
        with torch.no_grad():
            for data, label in val_loader:
                real_labels += label.detach().tolist()
                data = data.to('cuda')
                label = label.to('cuda')
                pred = model(data)
                loss = criterion(pred, label)
                val_loss.append(loss.item())
                out = F.softmax(pred, dim=1).cpu().detach().numpy()
                pred_label = np.argmax(out, 1)
                pred_labels += list(pred_label)
        val_acc = metrics.accuracy_score(real_labels, pred_labels)
        val_loss = np.mean(val_loss)
        
        # print log
        print("Epoch {:3d} | training loss {:5f} | training acc {:5f} | val loss {:5f} | val acc {:5f}".format(
            epoch, train_loss, train_acc, val_loss, val_acc))
    
    return model

model = train(model=model)

The training results are as follows (last three epochs shown):

Epoch 48 | training loss 0.136659 | training acc 0.958136 | val loss 0.160370 | val acc 0.951515
Epoch 49 | training loss 0.134608 | training acc 0.958136 | val loss 0.233730 | val acc 0.951515
Epoch 50 | training loss 0.135072 | training acc 0.954760 | val loss 0.159174 | val acc 0.951515

Building a Model with Transfer Learning (ResNet)

The model is constructed as follows:

model = models.resnet18(pretrained=True)

# Freeze model weights
for param in model.parameters():
    param.requires_grad = False

# checking if GPU is available
if torch.cuda.is_available():
    model = model.cuda()

# replace the last layer with a new layer
model.fc = nn.Sequential(
                        nn.Linear(512, 2)
                        )

# make the new last layer trainable
for param in model.fc.parameters():
    param.requires_grad = True

The training code is shown below:

def train(model, epochs=50, batch_size=32):
    # obtain dataloader
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
        
    # defining the optimizer
    optimizer = Adam(model.parameters(), lr=0.0001)
    
    # defining the loss function
    criterion = nn.CrossEntropyLoss()
    
    # checking if GPU is available
    if torch.cuda.is_available():
        model = model.to("cuda")
        criterion = criterion.to("cuda")
        
    # training
    for epoch in range(1, epochs + 1):
        model.train()
        training_loss = []
        real_labels, pred_labels = [], []
        for data, label in train_loader:
            real_labels += label.detach().tolist()
            data = data.to('cuda')
            label = label.to('cuda')
            pred = model(data)
            loss = criterion(pred, label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # record batch loss
            training_loss.append(loss.item())
            # for evaluation
            out = F.softmax(pred, dim=1).cpu().detach().numpy()
            pred_label = np.argmax(out, 1)
            pred_labels += list(pred_label)
            
        # evaluation for train    
        train_acc = metrics.accuracy_score(real_labels, pred_labels)
        train_loss = np.mean(training_loss)
        
        # evaluation for val
        val_loss = []
        real_labels, pred_labels = [], []
        model.eval()
        with torch.no_grad():
            for data, label in val_loader:
                real_labels += label.detach().tolist()
                data = data.to('cuda')
                label = label.to('cuda')
                pred = model(data)
                loss = criterion(pred, label)
                val_loss.append(loss.item())
                out = F.softmax(pred, dim=1).cpu().detach().numpy()
                pred_label = np.argmax(out, 1)
                pred_labels += list(pred_label)
        val_acc = metrics.accuracy_score(real_labels, pred_labels)
        val_loss = np.mean(val_loss)
        
        # print log
        print("Epoch {:3d} | training loss {:5f} | training acc {:5f} | val loss {:5f} | val acc {:5f}".format(
            epoch, train_loss, train_acc, val_loss, val_acc))
    
    return model

model = train(model=model)

The results are as follows (last three epochs shown):

Epoch 48 | training loss 0.218535 | training acc 0.916948 | val loss 0.210073 | val acc 0.921212
Epoch 49 | training loss 0.216172 | training acc 0.916948 | val loss 0.217664 | val acc 0.927273
Epoch 50 | training loss 0.215871 | training acc 0.914247 | val loss 0.210817 | val acc 0.921212
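If higher accuracy were needed, a common next step would be to also unfreeze the last residual block and fine-tune it with a smaller learning rate. This was not done for the results above; the sketch below only illustrates the idea (layer4 is the last block of torchvision's resnet18, and the learning rate is an arbitrary illustrative value):

# Unfreeze the last residual block of resnet18 for deeper fine-tuning
for param in model.layer4.parameters():
    param.requires_grad = True

# With more layers training, a smaller learning rate is usually safer, e.g.
# optimizer = Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-5)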

Summary

In this case, transfer learning clearly improves performance: the fine-tuned VGG16 head reaches about 95% validation accuracy and ResNet18 about 92%, compared with roughly 67% for the custom CNN baseline, which badly overfits the small training set.
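If the fine-tuned model is to be reused for inference later, saving its weights takes one line (a sketch; the file name is arbitrary):

# Save the fine-tuned weights; reload later with model.load_state_dict(torch.load(...))
torch.save(model.state_dict(), "emergency_resnet18.pth")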
