
PyTorch Deep Learning: Convolutional Neural Networks (Basics)


Basic concepts:

        Fully connected network

                If every layer of the network is connected through fully connected (linear) layers, the network is called a fully connected network.

        Convolutional neural network

                It operates directly on the image and preserves more of its spatial features. The convolution and subsampling (pooling) part at the front is called feature extraction; the fully connected part at the end is called classification.

Convolutional layer:

        For an RGB image the number of input channels is 3; the number of output channels equals the number of convolution kernels.

Single-channel computation:

        Multiply corresponding elements of the kernel and the image patch it covers, then sum the products.
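
A minimal sketch of this multiply-and-sum, using F.conv2d on a 3×3 single-channel input and a 2×2 all-ones kernel (shapes and values here are chosen only for illustration):

import torch
import torch.nn.functional as F

x = torch.arange(9.).reshape(1, 1, 3, 3)   # (batch, channels, H, W): values 0..8
w = torch.ones(1, 1, 2, 2)                 # one kernel, one input channel
y = F.conv2d(x, w)                         # each output = sum of element-wise products
print(y)   # tensor([[[[ 8., 12.], [20., 24.]]]])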

Multi-channel computation:

        Each input channel is convolved with its own slice of the kernel, and the per-channel results are then summed.

        An input image with three channels becomes a single-channel output image after being convolved with one kernel.
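
A short sketch of this channel-wise accumulation (random values and a 5×5 input chosen for illustration): a single kernel for a 3-channel input has weight shape (1, 3, 3, 3); each input channel is convolved with its own 3×3 slice and the three results are summed into one feature map.

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 5, 5)   # one 3-channel input image, 5x5
w = torch.randn(1, 3, 3, 3)   # one kernel with a 3x3 slice per input channel
y = F.conv2d(x, w)            # per-channel convolutions are summed into one map
print(y.shape)                # torch.Size([1, 1, 3, 3])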

A single kernel yields a single output channel:

 

Multiple kernels yield multiple output channels:
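
A minimal sketch with torch.nn.Conv2d (channel counts picked arbitrarily for illustration): 10 kernels applied to a 3-channel input give 10 output channels, one per kernel.

import torch

conv = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3)
x = torch.randn(1, 3, 5, 5)
print(conv(x).shape)          # torch.Size([1, 10, 3, 3]) -- one channel per kernel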

 

Padding:

Pad the border of the input with as many rings of zeros as needed.
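
For example (a sketch with a 5×5 input chosen for illustration), padding=1 adds one ring of zeros, so a 3×3 kernel keeps the spatial size unchanged:

import torch

conv = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
x = torch.randn(1, 1, 5, 5)
print(conv(x).shape)          # torch.Size([1, 1, 5, 5]) -- size preserved by padding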

 

Stride:

With stride = 2 the kernel moves two units at a time, which effectively shrinks the output height and width.
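
A quick sketch (7×7 input chosen for illustration): with kernel_size=3 and stride=2 the output shrinks from 7×7 to 3×3, i.e. (7 - 3) / 2 + 1 = 3.

import torch

conv = torch.nn.Conv2d(1, 1, kernel_size=3, stride=2, bias=False)
x = torch.randn(1, 1, 7, 7)
print(conv(x).shape)          # torch.Size([1, 1, 3, 3])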

Max pooling:

A 2×2 max pooling window uses a default stride of 2, halving the height and width.
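
A minimal sketch (4×4 input for illustration): MaxPool2d(2) keeps the maximum of each 2×2 block and halves both spatial dimensions.

import torch

pool = torch.nn.MaxPool2d(2)  # kernel_size=2, stride defaults to kernel_size
x = torch.randn(1, 1, 4, 4)
print(pool(x).shape)          # torch.Size([1, 1, 2, 2])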

A simple example:

Code implementation:

import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt

# MNIST data loaders: ToTensor + normalization with the dataset mean/std
batch_size = 64
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])

train_dataset = datasets.MNIST(root='../dataset/mnist',
                               train=True,
                               download=True,
                               transform=transform)
train_loader = DataLoader(train_dataset,
                          shuffle=True,
                          batch_size=batch_size)
test_dataset = datasets.MNIST(root='../dataset/mnist',
                              train=False,
                              download=True,
                              transform=transform)
test_loader = DataLoader(test_dataset,
                         shuffle=True,
                         batch_size=batch_size)


# two conv layers, each followed by ReLU + 2x2 max pooling, then one linear layer
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(320, 10)

    def forward(self, x):
        batch_size = x.size(0)
        x = self.pooling(F.relu(self.conv1(x)))
        x = self.pooling(F.relu(self.conv2(x)))
        x = x.view(batch_size, -1)   # flatten to (batch_size, 320)
        x = self.fc(x)               # raw logits; CrossEntropyLoss applies softmax
        return x


model = Net()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


def train(epoch):
    # one pass over the training set, printing the average loss every 300 batches
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        inputs, target = inputs.to(device), target.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0


def test():
    # evaluate classification accuracy on the test set without tracking gradients
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('accuracy on test set: %d %%' % (100 * correct / total))
    return correct / total


if __name__ == '__main__':
    epoch_list = []
    acc_list = []
    for epoch in range(10):
        train(epoch)
        acc = test()
        epoch_list.append(epoch)
        acc_list.append(acc)
    plt.plot(epoch_list, acc_list)
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.show()
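
The 320 input features of self.fc come from the shape trace of a 1×28×28 MNIST image: conv1 (kernel 5) gives 10×24×24, pooling gives 10×12×12, conv2 gives 20×8×8, pooling gives 20×4×4, and 20 * 4 * 4 = 320. A quick, purely illustrative sanity check of that trace, reusing the model defined above:

with torch.no_grad():
    dummy = torch.zeros(1, 1, 28, 28).to(device)
    feat = model.pooling(F.relu(model.conv1(dummy)))   # -> (1, 10, 12, 12)
    feat = model.pooling(F.relu(model.conv2(feat)))    # -> (1, 20, 4, 4)
    print(feat.view(1, -1).shape)                      # torch.Size([1, 320])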

Resulting plot: test-set accuracy plotted against the training epoch.
