
PyTorch Neural Networks


Contents

Basic Skeleton

Convolution Operations

Convolution Layer

Max Pooling

Non-linear Activation

Linear Layer

Small Hands-on Project

Loss Functions and Backpropagation

Optimizer

Modifying an Existing Network Model

Saving and Loading Networks

Model Training Routine

GPU Training

Complete Code

Model Validation

Tips


Basic Skeleton

nn.Module is the base class for all neural network modules, and your models should subclass it. Modules can contain other modules, allowing them to be nested in a tree structure, and you can assign submodules as regular attributes.

In other words, nn.Module works like a template: when writing your own network, you subclass it and fill in your own methods.

import torch
from torch import nn

# Define a neural network by subclassing nn.Module
class abc(nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, input):
        output = input + 1
        return output

# This network is trivial: given an input, it just adds one
# Create the network
a = abc()
x = torch.tensor(1.0)
output = a(x)  # feed x into the network
print(output)

Convolution Operations

2D convolution:

import torch
import torch.nn.functional as F

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]])
# The convolution kernel
kernel = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]])
print(input.shape)
print(kernel.shape)
# Both are 2D, but F.conv2d expects 4D input and weight tensors, so add dimensions
input = torch.reshape(input, (1, 1, 5, 5))
kernel = torch.reshape(kernel, (1, 1, 3, 3))  # batch_size, channel, H, W
print(input.shape)
print(kernel.shape)
output = F.conv2d(input, kernel, stride=1)  # step of 1 both horizontally and vertically
print(output)
output2 = F.conv2d(input, kernel, stride=2)  # step of 2 both horizontally and vertically
print(output2)
output3 = F.conv2d(input, kernel, stride=1, padding=1)  # with padding
print(output3)

Convolution Layer

import torch
import torchvision
from torch import nn
from torch.nn import Conv2d
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("dataset", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=64)

class ABC(nn.Module):
    def __init__(self):
        super(ABC, self).__init__()
        self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)

    def forward(self, x):
        x = self.conv1(x)
        return x

abc = ABC()
print(abc)
for data in dataloader:
    imgs, targets = data
    output = abc(imgs)
    print(imgs.shape)
    print(output.shape)

The network is named ABC, and its one convolution layer is conv1, with in_channels = 3 and out_channels = 6. kernel_size is set to 3, which expands to a square 3×3 kernel; stride = 1 means the kernel moves one step both horizontally and vertically.

On the input side, batch_size = 64 because the DataLoader loads 64 images at a time, in_channels = 3, and the images are 32×32. After the convolution there are 6 channels, and because there is no padding the image shrinks from 32×32 to 30×30, as the sketch below verifies.
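This follows the standard convolution output-size formula: H_out = floor((H_in + 2*padding - kernel_size) / stride) + 1. A quick sketch to verify the 32 to 30 step (the helper name conv_out is mine, for illustration):

def conv_out(h_in, kernel_size, stride=1, padding=0):
    # floor((H_in + 2*padding - kernel_size) / stride) + 1
    return (h_in + 2 * padding - kernel_size) // stride + 1

print(conv_out(32, kernel_size=3))             # 30, matching the shapes printed above
print(conv_out(32, kernel_size=3, padding=1))  # 32: padding=1 preserves the size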

Displaying the images:

import torch
import torchvision
from torch import nn
from torch.nn import Conv2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("dataset", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=64)

class ABC(nn.Module):
    def __init__(self):
        super(ABC, self).__init__()
        self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)

    def forward(self, x):
        x = self.conv1(x)
        return x

abc = ABC()
print(abc)
writer = SummaryWriter("logsa")
step = 0
for data in dataloader:
    imgs, targets = data
    output = abc(imgs)
    # torch.Size([64, 3, 32, 32])
    writer.add_images("input", imgs, step)
    # torch.Size([64, 6, 30, 30]); add_images expects 3 channels but the output has 6,
    # so fold the extra channels into the batch dimension
    output = torch.reshape(output, (-1, 3, 30, 30))
    writer.add_images("output", output, step)
    step = step + 1
writer.close()

 

Max Pooling

Max pooling keeps the salient features while reducing the number of values the network has to process, which shortens training time, much like downscaling a 1080p video to 720p. It is a staple of neural networks and cuts the computation dramatically; the typical order is convolution, then pooling, then non-linear activation.

A pooling function replaces the network's output at a position with a summary statistic of the nearby outputs; in essence it is downsampling. Pooling layers shrink the model, speed up computation, and make the extracted features more robust.

kernel_size: the window over which the maximum is taken, analogous to a convolution kernel. An int gives a square window with that side length; a tuple of two ints gives a rectangle.

stride: the step size. Unlike in the convolution layer, it defaults to kernel_size.

padding: same as in the convolution layer; specified like kernel_size.

dilation: controls the spacing between elements within the window, i.e. it puts gaps between adjacent elements.

ceil_mode: with True, windows that only partially overlap the input still produce a maximum; with False, such incomplete windows are discarded (ceil rounds the output size up, floor rounds it down). A comparison is sketched after the example below.

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]])

The input needs four dimensions: batch_size, channel, height, and width. Pass -1 to let PyTorch work out the batch_size itself; there is one channel, so 1; and the data is 5×5, so 5, 5. Run it:

input = torch.reshape(input, (-1, 1, 5, 5))
print(input.shape)
# torch.Size([1, 1, 5, 5]): one batch, one channel, one 5×5 map, which satisfies the input requirements

Create a network:

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.maxpool1 = nn.MaxPool2d(kernel_size=3, ceil_mode=True)

    def forward(self, input):
        return self.maxpool1(input)

Running the network reveals that max pooling does not support long (integer) tensors, so add dtype=torch.float32 when creating the input above:

a = Model()
output = a(input)
print(output)
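To make ceil_mode concrete, here is a small self-contained sketch comparing both settings on the same 5×5 input (stride defaults to kernel_size, i.e. 3 here):

import torch
from torch import nn

# The 5×5 input from above, as float32 so that pooling accepts it
input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]], dtype=torch.float32)
input = torch.reshape(input, (-1, 1, 5, 5))
pool_ceil = nn.MaxPool2d(kernel_size=3, ceil_mode=True)
pool_floor = nn.MaxPool2d(kernel_size=3, ceil_mode=False)
print(pool_ceil(input))   # 2×2 output: partial windows at the edges are kept
print(pool_floor(input))  # 1×1 output: partial windows are dropped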

Visualization:

import torch
import torchvision
from torch import nn
from torch.nn import Conv2d, MaxPool2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("dataset", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)  # test split, converted to tensors, downloaded
dataloader = DataLoader(dataset, batch_size=64)

class ABC(nn.Module):
    def __init__(self):
        super(ABC, self).__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=True)

    def forward(self, x):
        x = self.maxpool1(x)
        return x

abc = ABC()
print(abc)
writer = SummaryWriter("logs_maxpool")
step = 0
for data in dataloader:
    imgs, targets = data
    output = abc(imgs)
    writer.add_images("input", imgs, step)
    writer.add_images("output", output, step)
    step = step + 1
writer.close()

 

Non-linear Activation

Non-linear activations give the network its non-linear character; common ones are ReLU and Sigmoid. The more non-linearity, the better the network can fit models with varied features.

Running the code below, we can see the negative values get clipped:

import torch
from torch import nn
from torch.nn import ReLU

input = torch.tensor([[1, -0.5], [-1, 3]])
output = torch.reshape(input, (-1, 1, 2, 2))  # batch_size, channel, H, W
print(output.shape)

class ABC(nn.Module):
    def __init__(self):
        super(ABC, self).__init__()
        self.relu1 = ReLU()

    def forward(self, input):
        output = self.relu1(input)
        return output

abc = ABC()
output = abc(input)
print(output)
# tensor([[1., 0.], [0., 3.]])

The Sigmoid function:

import torch
import torchvision
from torch import nn
from torch.nn import ReLU, Sigmoid
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

input = torch.tensor([[1, -0.5], [-1, 3]])
output = torch.reshape(input, (-1, 1, 2, 2))
print(output.shape)
dataset = torchvision.datasets.CIFAR10("data", train=False, download=True, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)

class ABC(nn.Module):
    def __init__(self):
        super(ABC, self).__init__()
        self.relu1 = ReLU()
        self.sigmoid1 = Sigmoid()

    def forward(self, input):
        output = self.sigmoid1(input)
        return output

abc = ABC()
writer = SummaryWriter("uio")
step = 0
for data in dataloader:
    imgs, targets = data
    writer.add_images("input", imgs, global_step=step)
    output = abc(imgs)
    writer.add_images("output", output, step)
    step += 1
writer.close()

Linear Layer

When bias is set to True, the layer includes the bias term b (y = xW^T + b); a minimal sketch follows.
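A minimal sketch of nn.Linear (the in_features value 196608 comes from flattening a batch of 64 CIFAR10 images: 64 * 3 * 32 * 32):

import torch
from torch import nn

linear = nn.Linear(in_features=196608, out_features=10, bias=True)  # bias=True adds b in y = xW^T + b
x = torch.ones(64, 3, 32, 32)
x = torch.flatten(x)    # one long vector of 196608 values
print(x.shape)          # torch.Size([196608])
print(linear(x).shape)  # torch.Size([10])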

Small Hands-on Project

Max pooling does not change the number of channels.

Step 1 is a convolution with in_channels 3, out_channels 32, and kernel size 5. Working the output-size formula backwards (the 32×32 input must stay 32×32) gives padding = 2 and stride = 1:

self.conv1 = Conv2d(3,32,5,padding=2)

The pooling part:

self.maxpool1 = MaxPool2d(kernel_size=2)

Flatten:

self.flatten = Flatten()

After Flatten there are 64 × 4 × 4 = 1024 values per image (64 channels of 4×4 feature maps).

1024 is then the first linear layer's in_features and 64 its out_features; a second linear layer maps in_features 64 to out_features 10, because CIFAR10 has 10 classes of images. The sketch below traces where the 1024 comes from.
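Where does the 1024 come from? A sketch tracing the shapes layer by layer (assuming 32×32 CIFAR10 inputs):

import torch
from torch import nn

x = torch.ones(64, 3, 32, 32)
layers = [nn.Conv2d(3, 32, 5, padding=2), nn.MaxPool2d(2),
          nn.Conv2d(32, 32, 5, padding=2), nn.MaxPool2d(2),
          nn.Conv2d(32, 64, 5, padding=2), nn.MaxPool2d(2),
          nn.Flatten()]
for layer in layers:
    x = layer(x)
    print(type(layer).__name__, tuple(x.shape))
# The final Flatten yields (64, 1024) = (64, 64*4*4), hence Linear(1024, 64).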

The linear layers:

self.linear1 = Linear(1024, 64)
self.linear2 = Linear(64, 10)

The finished network:

from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear

class Tuidui(nn.Module):
    def __init__(self):
        super(Tuidui, self).__init__()
        self.conv1 = Conv2d(3, 32, 5, padding=2)
        self.maxpool1 = MaxPool2d(kernel_size=2)
        self.conv2 = Conv2d(32, 32, 5, padding=2)
        self.maxpool2 = MaxPool2d(2)
        self.conv3 = Conv2d(32, 64, 5, padding=2)
        self.maxpool3 = MaxPool2d(2)
        self.flatten = Flatten()
        self.linear1 = Linear(1024, 64)
        self.linear2 = Linear(64, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        return x

tuidui = Tuidui()
print(tuidui)
# Tuidui(
#   (conv1): Conv2d(3, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
#   (maxpool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
#   (conv2): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
#   (maxpool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
#   (conv3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
#   (maxpool3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
#   (flatten): Flatten(start_dim=1, end_dim=-1)
#   (linear1): Linear(in_features=1024, out_features=64, bias=True)
#   (linear2): Linear(in_features=64, out_features=10, bias=True)
# )

Verify correctness:

import torch

tuidui = Tuidui()
print(tuidui)
input = torch.ones(64, 3, 32, 32)
print(tuidui(input).shape)
# torch.Size([64, 10])

Each image produces 10 numbers (one score per class), and batch_size = 64 (think of it as 64 images), hence the output shape [64, 10].

The Sequential version:

import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential

class Tuidui(nn.Module):
    def __init__(self):
        super(Tuidui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

tuidui = Tuidui()
print(tuidui)
input = torch.ones(64, 3, 32, 32)
print(tuidui(input).shape)

Visualization:

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("logs")
writer.add_graph(tuidui, input)
writer.close()

Then view the graph with: tensorboard --logdir=logs

Loss Functions and Backpropagation

A loss function measures the gap between the actual output and the target, and provides the basis for updating the parameters (via backpropagation).

Take L1Loss as an example (the mean of the absolute differences; here (|1-1| + |2-2| + |5-3|)/3 = 2/3 ≈ 0.6667):

import torch
from torch.nn import L1Loss

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)
inputs = torch.reshape(inputs, (1, 1, 1, 3))  # 1 batch, 1 channel, 1 row, 3 columns
targets = torch.reshape(targets, (1, 1, 1, 3))
loss = L1Loss()
result = loss(inputs, targets)
print(result)
# tensor(0.6667)

Using mean squared error as an example (here (0² + 0² + 2²)/3 = 4/3 ≈ 1.3333):

from torch.nn import MSELoss

loss_mse = MSELoss()
result_mse = loss_mse(inputs, targets)
print(result_mse)
# tensor(1.3333)

Cross-entropy:

It is useful when training a classification problem with C classes. If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set. (From the PyTorch documentation for CrossEntropyLoss.)

The formula, for a vector of raw scores $x$ and a target class, is:

$$\mathrm{loss}(x,\ \mathrm{class}) = -\log\left(\frac{\exp(x[\mathrm{class}])}{\sum_{j}\exp(x[j])}\right) = -x[\mathrm{class}] + \log\left(\sum_{j}\exp(x[j])\right)$$

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=1)

class Tuidui(nn.Module):
    def __init__(self):
        super(Tuidui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

tuidui = Tuidui()
for data in dataloader:
    imgs, targets = data
    output = tuidui(imgs)
    print(output)
    print(targets)

Each input image is fed through the network and produces one output vector with 10 entries, one per class; each entry is the network's score for that class, and the index with the highest score is the predicted class. The first target shown here is 3:

tensor([[ 0.0849,  0.1104,  0.0028,  0.1187,  0.0465,  0.0355,  0.0250, -0.1043,
         -0.0789, -0.1001]], grad_fn=<AddmmBackward0>)
tensor([3])
tensor([[ 0.0779,  0.0849,  0.0025,  0.1177,  0.0436,  0.0274,  0.0123, -0.0999,
         -0.0736, -0.1052]], grad_fn=<AddmmBackward0>)
tensor([8]) 

Now apply cross-entropy; the number obtained is the error between the network's output and the true target:

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=1)

class Tuidui(nn.Module):
    def __init__(self):
        super(Tuidui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
tuidui = Tuidui()
for data in dataloader:
    imgs, targets = data
    output = tuidui(imgs)
    result_loss = loss(output, targets)
    print(result_loss)

 Files already downloaded and verified
tensor(2.3258, grad_fn=<NllLossBackward0>)
tensor(2.3186, grad_fn=<NllLossBackward0>)
tensor(2.3139, grad_fn=<NllLossBackward0>)

Backpropagation:

result_loss.backward()

Backpropagation computes a gradient for each parameter; the parameters are then updated and optimized according to those gradients. A toy sketch of the mechanics follows.
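A minimal sketch of what backward() does, on a toy scalar problem (all names here are illustrative):

import torch

w = torch.tensor(2.0, requires_grad=True)
loss = (3 * w - 5) ** 2   # pretend prediction 3*w against target 5
loss.backward()           # fills w.grad via the chain rule
print(w.grad)             # tensor(6.) = 2*(3*w - 5)*3 at w = 2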

Optimizer

When using a loss function, calling backward computes a gradient for every parameter that needs adjusting; with those gradients, an optimizer can adjust the parameters and drive the error down.

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader

# Load the dataset and convert it to tensors
dataset = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
# Load with DataLoader
dataloader = DataLoader(dataset, batch_size=1)

# Create the network
class Tuidui(nn.Module):
    def __init__(self):
        super(Tuidui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

# Define the loss
loss = nn.CrossEntropyLoss()
# Instantiate the network
tuidui = Tuidui()
# Optimizer: stochastic gradient descent
optim = torch.optim.SGD(tuidui.parameters(), lr=0.01)
# Fetch the data
for data in dataloader:
    imgs, targets = data
    output = tuidui(imgs)
    # Compute the gap between output and target
    result_loss = loss(output, targets)
    # Reset the gradients to zero
    optim.zero_grad()
    # Compute the gradients
    result_loss.backward()
    # Let the optimizer update the parameters
    optim.step()
    print(result_loss)

Printing the loss, we find it barely changes: deep learning requires pass after pass over the data, so add an outer for loop to train for multiple epochs.

# Train for 20 epochs
for epoch in range(20):
    running_loss = 0.0
    # Fetch the data
    for data in dataloader:
        imgs, targets = data
        output = tuidui(imgs)
        # Compute the gap between output and target
        result_loss = loss(output, targets)
        # Reset the gradients to zero
        optim.zero_grad()
        # Compute the gradients
        result_loss.backward()
        # Let the optimizer update the parameters
        optim.step()
        running_loss = running_loss + result_loss
    print(running_loss)

We can see the loss decrease with every epoch, although training is slow.

Modifying an Existing Network Model

To adapt an existing network, add a linear layer that maps its 1000 output classes down to 10:

vgg16_true.add_module('add_linear',nn.Linear(1000,10))

You can also modify an existing layer in place instead of adding one:

vgg16_true.classifier[6] = nn.Linear(4096,10)
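Putting both options together, a sketch (assumes torchvision's VGG16; pretrained=True downloads the ImageNet weights in older torchvision versions, newer ones use a weights= argument instead). Adding the new layer inside classifier ensures it actually runs in the forward pass:

import torchvision
from torch import nn

vgg16_true = torchvision.models.vgg16(pretrained=True)    # ImageNet weights
vgg16_false = torchvision.models.vgg16(pretrained=False)  # random initialization
# Option 1: append a new layer inside the classifier block (1000 -> 10 classes)
vgg16_true.classifier.add_module('add_linear', nn.Linear(1000, 10))
# Option 2: replace the last classifier layer in place
vgg16_false.classifier[6] = nn.Linear(4096, 10)
print(vgg16_true)
print(vgg16_false)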

Saving and Loading Networks

torch.save(vgg16, "vgg16_method1.pth")        # method 1: save the network structure and its parameters

vgg16 = torch.load("vgg16_method1.pth")        # load it back

torch.save(vgg16.state_dict(), "vgg16_method2.pth")        # method 2: save only the parameters as a dict (officially recommended, smaller file; file name illustrative)
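To load a model saved with method 2, rebuild the structure first and then load the parameters into it (a sketch reusing the file name above):

import torch
import torchvision

vgg16 = torchvision.models.vgg16(pretrained=False)      # rebuild the architecture
vgg16.load_state_dict(torch.load("vgg16_method2.pth"))  # restore the saved parameters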

Model Training Routine

1. Prepare the datasets

train_data = torchvision.datasets.CIFAR10("data", train=True, transform=torchvision.transforms.ToTensor(), download=True)
test_data = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(), download=True)

2. Load the datasets

# Load with DataLoader
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

3. Create the neural network

# Create the neural network
class Tuidui(nn.Module):
    def __init__(self):
        super(Tuidui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

Import the network into the main program:

from model import *

Note that the network file must sit in the same folder as the main program; a sketch of such a file follows.
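For reference, a sketch of what that separate file might contain (the file name model.py is assumed from the import above; the __main__ block is a common self-test that only runs when the file is executed directly):

# model.py (hypothetical file name, matching "from model import *" above)
import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential

class Tuidui(nn.Module):
    def __init__(self):
        super(Tuidui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2), MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2), MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2), MaxPool2d(2),
            Flatten(), Linear(1024, 64), Linear(64, 10)
        )

    def forward(self, x):
        return self.model1(x)

if __name__ == '__main__':
    # Quick self-test of the shapes
    tuidui = Tuidui()
    input = torch.ones(64, 3, 32, 32)
    print(tuidui(input).shape)  # torch.Size([64, 10])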

4. Instantiate the network

# Instantiate the network
tuidui = Tuidui()

5. Loss function

# Define the loss
loss_fn = nn.CrossEntropyLoss()

6. Optimizer

# Optimizer: stochastic gradient descent
learning_rate = 0.01
optimizer = torch.optim.SGD(tuidui.parameters(), lr=learning_rate)

7. Set the training parameters

# Training parameters
# Counters for the number of steps
total_train_step = 0
total_test_step = 0
# Number of training epochs
epoch = 10
for i in range(epoch):
    print(f"Epoch {i+1} starting")
    running_loss = 0.0
    # Training steps
    for data in train_dataloader:
        imgs, targets = data
        output = tuidui(imgs)
        # Compute the gap between output and target
        result_loss = loss_fn(output, targets)
        # Optimize the model: reset the gradients to zero
        optimizer.zero_grad()
        # Compute the gradients
        result_loss.backward()
        # Let the optimizer update the parameters
        optimizer.step()
        running_loss = running_loss + result_loss
        total_train_step = total_train_step + 1
        print(f"Training step {total_train_step}, loss {result_loss}")

8. Test

# Testing steps
total_test_loss = 0
with torch.no_grad():
    for data in test_dataloader:
        imgs, targets = data
        output = tuidui(imgs)
        result_loss = loss_fn(output, targets)
        total_test_loss = total_test_loss + result_loss
print(f"Total loss on the test set: {total_test_loss}")

9. Visualization

writer.add_scalar("train_loss",result_loss,total_train_step)
writer.add_scalar("test_loss",total_test_loss,total_test_step)

Open TensorBoard (tensorboard --logdir=<your log directory>) to view the curves.

10. Save

torch.save(tuidui,f"tuidui_{i}.pth")

The saved .pth files appear in the project directory (the file tree on the left in the IDE).

11. Add the accuracy metric; the logic is sketched below and appears in the complete code.
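In isolation, the accuracy logic looks like this (a toy sketch with made-up scores):

import torch

outputs = torch.tensor([[0.1, 0.2],   # scores for 2 samples over 2 classes
                        [0.3, 0.4]])
targets = torch.tensor([0, 1])
preds = outputs.argmax(1)             # class with the highest score per sample
print((preds == targets).sum())       # tensor(1): one of the two predictions is correct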

The complete code:

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from module import *  # the file that defines the Tuidui network

# Load the datasets and convert them to tensors
train_data = torchvision.datasets.CIFAR10("data", train=True, transform=torchvision.transforms.ToTensor(),
                                          download=True)
test_data = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(),
                                         download=True)
# Dataset sizes
train_data_size = len(train_data)
test_data_size = len(test_data)
print(f"Training set size: {train_data_size}")
print(f"Test set size: {test_data_size}")
# Load with DataLoader
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)
# Define the loss
loss_fn = nn.CrossEntropyLoss()
# Instantiate the network
tuidui = Tuidui()
# Optimizer: stochastic gradient descent
learning_rate = 0.01
optimizer = torch.optim.SGD(tuidui.parameters(), lr=learning_rate)
# Training parameters: step counters
total_train_step = 0
total_test_step = 0
# Number of epochs
epoch = 10
# Add TensorBoard
writer = SummaryWriter("logs_train")
for i in range(epoch):
    print(f"Epoch {i + 1} starting")
    running_loss = 0.0
    # Training steps
    for data in train_dataloader:
        imgs, targets = data
        output = tuidui(imgs)
        # Compute the gap between output and target
        result_loss = loss_fn(output, targets)
        # Optimize the model: reset the gradients to zero
        optimizer.zero_grad()
        # Compute the gradients
        result_loss.backward()
        # Let the optimizer update the parameters
        optimizer.step()
        running_loss = running_loss + result_loss
        total_train_step = total_train_step + 1
        if total_train_step % 100 == 0:
            print(f"Training step {total_train_step}, loss {result_loss}")
            writer.add_scalar("train_loss", result_loss, total_train_step)
    # Testing steps
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            output = tuidui(imgs)
            result_loss = loss_fn(output, targets)
            total_test_loss = total_test_loss + result_loss
            # Accuracy
            accuracy = (output.argmax(1) == targets).sum()
            total_accuracy = total_accuracy + accuracy
    print(f"Total loss on the test set: {total_test_loss}")
    print(f"Accuracy on the test set: {total_accuracy / test_data_size}")
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)
    total_test_step = total_test_step + 1
    # Save after each epoch
    torch.save(tuidui, f"tuidui_{i}.pth")
    # torch.save(tuidui.state_dict(), f"tuidui_{i}.pth")  # saving method 2
    print("Model saved")
writer.close()

The argmax function:

import numpy as np

a = np.array([[1, 3, 5, 7],
              [5, 7, 2, 2],
              [4, 6, 8, 1]])
b = np.argmax(a, axis=0)  # index of the maximum down each column
print(b)  # [1 1 2 0]
b = np.argmax(a, axis=1)  # index of the maximum along each row
print(b)  # [3 1 2]

import torch

outputs = torch.tensor([[0.1, 0.2],
                        [0.05, 0.4]])
print(outputs.argmax(1))  # tensor([1, 1])
print(outputs.argmax(0))  # tensor([0, 1])

GPU Training

To train on the GPU, call .cuda() on the network model, the inputs (and targets), and the loss function.

To make sure a GPU is actually present, guard each call with if torch.cuda.is_available():. An equivalent device-based pattern is sketched below.
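An equivalent, more flexible pattern defines a device once and moves the model, loss function, and data with .to() (a toy sketch):

import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(10, 2).to(device)  # move the model; do the same for the loss function
x = torch.ones(1, 10).to(device)     # move inputs (and targets) the same way inside the loop
print(model(x).device)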

Complete code:

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# Load the datasets and convert them to tensors
train_data = torchvision.datasets.CIFAR10("data", train=True, transform=torchvision.transforms.ToTensor(),
                                          download=True)
test_data = torchvision.datasets.CIFAR10("data", train=False, transform=torchvision.transforms.ToTensor(),
                                         download=True)
# Dataset sizes
train_data_size = len(train_data)
test_data_size = len(test_data)
print(f"Training set size: {train_data_size}")
print(f"Test set size: {test_data_size}")
# Load with DataLoader
train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

# Create the neural network
class Tuidui(nn.Module):
    def __init__(self):
        super(Tuidui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

# Instantiate the network and move it to the GPU if one is available
tuidui = Tuidui()
if torch.cuda.is_available():
    tuidui = tuidui.cuda()
# Define the loss
loss_fn = nn.CrossEntropyLoss()
if torch.cuda.is_available():
    loss_fn = loss_fn.cuda()
# Optimizer: stochastic gradient descent
learning_rate = 0.01
optimizer = torch.optim.SGD(tuidui.parameters(), lr=learning_rate)
# Training parameters: step counters
total_train_step = 0
total_test_step = 0
# Number of epochs
epoch = 10
# Add TensorBoard
writer = SummaryWriter("logs_train")
for i in range(epoch):
    print(f"Epoch {i + 1} starting")
    running_loss = 0.0
    # Training steps
    for data in train_dataloader:
        imgs, targets = data
        if torch.cuda.is_available():
            imgs = imgs.cuda()
            targets = targets.cuda()
        output = tuidui(imgs)
        # Compute the gap between output and target
        result_loss = loss_fn(output, targets)
        # Optimize the model: reset the gradients to zero
        optimizer.zero_grad()
        # Compute the gradients
        result_loss.backward()
        # Let the optimizer update the parameters
        optimizer.step()
        running_loss = running_loss + result_loss
        total_train_step = total_train_step + 1
        if total_train_step % 100 == 0:
            print(f"Training step {total_train_step}, loss {result_loss}")
            writer.add_scalar("train_loss", result_loss, total_train_step)
    # Testing steps
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            if torch.cuda.is_available():
                imgs = imgs.cuda()
                targets = targets.cuda()
            output = tuidui(imgs)
            result_loss = loss_fn(output, targets)
            total_test_loss = total_test_loss + result_loss
            # Accuracy
            accuracy = (output.argmax(1) == targets).sum()
            total_accuracy = total_accuracy + accuracy
    print(f"Total loss on the test set: {total_test_loss}")
    print(f"Accuracy on the test set: {total_accuracy / test_data_size}")
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)
    total_test_step = total_test_step + 1
    # Save after each epoch
    torch.save(tuidui, f"tuidui_{i}.pth")
    # torch.save(tuidui.state_dict(), f"tuidui_{i}.pth")  # saving method 2
    print("Model saved")
writer.close()

Model Validation

Now download a photo of a dog into the img folder (where the Uchiha Itachi image was in the earlier sections).

Mind which device the model was trained on: a model trained on the GPU raises a .cuda-related error when loaded directly on a CPU-only machine, so a GPU-trained model must be mapped back to the CPU at load time:

model = torch.load("tuidui_0.pth",map_location=torch.device('cpu'))

Below is validation with a model trained on the CPU:

import torch
import torchvision.transforms
from PIL import Image
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear

image_path = r"F:\python\example\image\dog.jpg"
image = Image.open(image_path)
print(image)
transform = torchvision.transforms.Compose([torchvision.transforms.Resize((32, 32)),
                                            torchvision.transforms.ToTensor()])
image = transform(image)
print(image.shape)  # torch.Size([3, 32, 32])

# Create the neural network (the structure must match the saved model)
class Tuidui(nn.Module):
    def __init__(self):
        super(Tuidui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

# Load the model (trained on the CPU)
model = torch.load("tuidui_0.pth")
# Feed the image into the model
image = torch.reshape(image, (1, 3, 32, 32))  # the model expects 4D input
with torch.no_grad():
    output = model(image)
print(output)
print(output.argmax(1))

<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1280x1946 at 0x24CD068FDC0>
torch.Size([3, 32, 32])
tensor([[-1.1488, -0.2753,  0.7588,  0.7703,  0.7203,  0.9781,  0.7678,  0.3184,
         -2.0950, -1.3502]])

tensor([5])

We can see that the 6th number is the largest, so the predicted class is index 5, i.e. tensor([5]); checking the class list in the debugger shows that index 5 is dog, so the prediction is correct.

And below with a model trained on the GPU:

import torch
import torchvision.transforms
from PIL import Image
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear

image_path = r"F:\python\example\image\dog.jpg"
image = Image.open(image_path)
print(image)
transform = torchvision.transforms.Compose([torchvision.transforms.Resize((32, 32)),
                                            torchvision.transforms.ToTensor()])
image = transform(image)
print(image.shape)  # torch.Size([3, 32, 32])

# Create the neural network (the structure must match the saved model)
class Tuidui(nn.Module):
    def __init__(self):
        super(Tuidui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(kernel_size=2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

# Load the model (trained on the GPU) and map it back to the CPU
model = torch.load("tuidui_0.pth", map_location=torch.device('cpu'))
# Feed the image into the model
image = torch.reshape(image, (1, 3, 32, 32))  # the model expects 4D input
with torch.no_grad():
    output = model(image)
print(output)
print(output.argmax(1))

 

Tips:

  • Ctrl+P: shows the parameters a function expects.
  • The IDE can help auto-complete built-in functions.
  • When writing a batch of images to TensorBoard, the method is add_images, note the s.
  • In the terminal, activate the PyTorch (conda) environment first.