AlexNet is one of the most classic network architectures in computer vision. It debuted in 2012 and won several competitions that year.
The structure is fairly simple: five convolutional layers followed by three fully connected layers. The original authors split training across multiple GPUs; see the paper for the details.
Link to the AlexNet paper: http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf
The authors use the ReLU activation function and methods such as Dropout to prevent overfitting; see the paper for more details.
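The MNIST reimplementation below actually skips Dropout, since the dataset is small. As a rough illustration (my addition, not the post's code) of what the paper's regularization looks like in PyTorch, here is a toy fully connected head with Dropout at p = 0.5, the rate used in the paper:

import torch
import torch.nn as nn

# Illustrative FC head with Dropout (p = 0.5, as in the AlexNet paper).
# During training, Dropout zeroes each activation with probability p,
# which discourages co-adaptation of features.
head = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)
head.train()
print(head(torch.randn(4, 512)).shape)  # torch.Size([4, 10])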
This post uses the MNIST handwritten-digit dataset; torchvision ships with a download URL for it, so the data can be fetched automatically.
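As an aside (my addition, not from the original post): the Normalize constants (0.1307,) and (0.3081,) used in the training script later are simply the mean and standard deviation of the MNIST training pixels, which a short snippet can verify:

import torch
import torchvision

# Compute the global mean/std of the MNIST training set; these are the
# (0.1307,), (0.3081,) constants passed to transforms.Normalize below.
ds = torchvision.datasets.MNIST('mnist_data', train=True, download=True,
                                transform=torchvision.transforms.ToTensor())
pixels = torch.stack([img for img, _ in ds])  # [60000, 1, 28, 28]
print(pixels.mean().item(), pixels.std().item())  # ≈ 0.1307, 0.3081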
Simply define the layers one by one following the structure diagram. Convolutional layers 1, 2, and 5 are each followed by a max-pooling layer and a ReLU activation. The five convolutional layers produce the image's feature representation, which is then fed into the fully connected layers for classification.
# !/usr/bin/python3
# -*- coding:utf-8 -*-
# Author:WeiFeng Liu
# @Time: 2021/11/2 3:25 PM

import torch
import torch.nn as nn


class AlexNet(nn.Module):
    def __init__(self, width_mult=1):
        super(AlexNet, self).__init__()
        # Define each convolutional block
        self.layer1 = nn.Sequential(
            # Input image is 1*28*28
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            # Pooling keeps the number of channels and halves the resolution
            nn.MaxPool2d(kernel_size=2, stride=2),
            # ReLU activation
            nn.ReLU(inplace=True),
        )
        self.layer2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
        )
        self.layer3 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
        )
        self.layer4 = nn.Sequential(
            nn.Conv2d(128, 256, kernel_size=3, padding=1),
        )
        self.layer5 = nn.Sequential(
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.ReLU(inplace=True),
        )
        # Fully connected layers
        self.fc1 = nn.Linear(256 * 3 * 3, 1024)
        self.fc2 = nn.Linear(1024, 512)
        self.fc3 = nn.Linear(512, 10)  # ten output classes

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        x = x.view(-1, 256 * 3 * 3)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x
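A quick sanity check on the dimensions (my addition): a 28*28 input is halved twice by the 2x2 pools (28 -> 14 -> 7) and then reduced to 3x3 by the final 3x3/stride-2 pool, so the flattened feature vector is 256 * 3 * 3, matching the input of fc1. A dummy-input check, assuming the class above is saved as alexnet.py:

import torch
from alexnet import AlexNet

net = AlexNet()
x = torch.randn(1, 1, 28, 28)  # one fake MNIST image
print(net(x).shape)            # torch.Size([1, 10])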
# !/usr/bin/python3
# -*- coding:utf-8 -*-
# Author:WeiFeng Liu
# @Time: 2021/11/2 3:38 PM

import torch
import torch.nn as nn
import torchvision
import torch.optim as optim
from alexnet import AlexNet
from utils import plot_curve

# Use the GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hyperparameters
epochs = 30
batch_size = 256
lr = 0.01

train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('mnist_data', train=True, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   # Normalize with the MNIST mean and std
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('mnist_data/', train=False, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=256, shuffle=False)

# Loss function
criterion = nn.CrossEntropyLoss()
# Network
net = AlexNet().to(device)
# Optimizer
optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.9)

# Train
train_loss = []
for epoch in range(epochs):
    sum_loss = 0.0
    for batch_idx, (x, y) in enumerate(train_loader):
        x = x.to(device)
        y = y.to(device)
        # Zero the gradients
        optimizer.zero_grad()
        pred = net(x)
        loss = criterion(pred, y)
        loss.backward()
        optimizer.step()
        train_loss.append(loss.item())
        sum_loss += loss.item()
        # Print the average loss every 100 batches
        if batch_idx % 100 == 99:
            print('[%d, %d] loss: %.03f' % (epoch + 1, batch_idx + 1, sum_loss / 100))
            sum_loss = 0.0

torch.save(net.state_dict(), '/home/lwf/code/pytorch学习/alexnet图像分类/model/model.pth')
plot_curve(train_loss)
The network is trained with the cross-entropy loss and an SGD optimizer (momentum 0.9); after training, the model weights are saved to disk.
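The script above builds a test_loader but never evaluates on it. A minimal evaluation sketch (my addition, assuming the net, device, and test_loader defined above):

import torch

# Evaluate classification accuracy on the held-out test set.
net.eval()
correct, total = 0, 0
with torch.no_grad():
    for x, y in test_loader:
        x, y = x.to(device), y.to(device)
        pred = net(x).argmax(dim=1)  # index of the largest logit
        correct += (pred == y).sum().item()
        total += y.size(0)
print('test accuracy: %.2f%%' % (100.0 * correct / total))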
The loss curve during training, plotted with plot_curve:
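plot_curve comes from the author's utils module, which is not shown in this post; a minimal matplotlib-based sketch of what such a helper might look like (an assumption, not the author's actual code):

import matplotlib.pyplot as plt

def plot_curve(data):
    # Plot a scalar series (here, the per-batch training loss).
    plt.plot(range(len(data)), data, color='blue')
    plt.xlabel('step')
    plt.ylabel('loss')
    plt.show()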
Full code: https://github.com/SPECTRELWF/pytorch-cnn-study/tree/main/Alexnet-MNIST
Personal homepage: http://liuweifeng.top:8090/