
Convolutional Neural Networks (Basics): An Introductory Project

Notes:

0. The first part of the network is called Feature Extraction; the latter part is called Classification.

1. Each convolution kernel must have the same number of channels as its input. The total number of kernels equals the number of output channels.

2. After convolution, C (channels) changes, while W (width) and H (height) may or may not change depending on padding (see the output-size formula after this list). After subsampling (pooling), C is unchanged while W and H shrink.

3. Convolutional layers preserve the spatial information of the image.

4. Convolutional layers expect four-dimensional tensors (B, C, W, H) as input and output, whereas fully connected layers work with two-dimensional tensors (B, input_features). (See also: PyTorch's nn.Linear() explained.)

5. Convolution (a linear transform), activation (a nonlinear transform), pooling; after several rounds of this, flatten with view and feed the result into the fully connected layers.
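
A handy rule of thumb (my addition; this is standard convolution arithmetic rather than part of the original notes): the output width is

    W_out = floor((W + 2 * padding - kernel_size) / stride) + 1

and likewise for the height. It predicts every shape in the examples below: (100 - 3) / 1 + 1 = 98 with no padding, (100 + 2 - 3) / 1 + 1 = 100 with padding=1, and floor((100 - 3) / 2) + 1 = 49 with stride=2.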


1. Convolution

import torch

# input and output channel counts
in_channels, out_channels = 5, 10
# image size
width, height = 100, 100
# kernel size: 3 means a 3x3 square kernel; each kernel's channel count
# matches the input's channel count, i.e. 5
kernel_size = 3
# number of images fed in at once
batch_size = 1

input = torch.randn(batch_size, in_channels, width, height)

# out_channels determines the number of kernels: here, 10 kernels of size 3x3x5
conv_layer = torch.nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size)

output = conv_layer(input)
print(input.shape)
print(output.shape)
print(conv_layer.weight.shape)

Output:

torch.Size([1, 5, 100, 100])
torch.Size([1, 10, 98, 98])
torch.Size([10, 5, 3, 3])

Sometimes we want the output of the convolution to have the same size as the original image. This is what the padding argument is for; it defaults to 0, and with a 3x3 kernel, padding=1 keeps W and H unchanged.

conv_layer_with_padding = torch.nn.Conv2d(in_channels, out_channels,
                                          kernel_size=kernel_size, padding=1)
output_with_padding = conv_layer_with_padding(input)
print(output_with_padding.shape)

Output:

torch.Size([1, 10, 100, 100])

At other times we want to shrink the feature maps further to cut down on computation. This is done with stride, the step the kernel moves at each position, which defaults to 1.

conv_layer_with_stride = torch.nn.Conv2d(in_channels, out_channels,
                                         kernel_size=kernel_size, stride=2)
output_with_stride = conv_layer_with_stride(input)
print(output_with_stride.shape)

Output:

torch.Size([1, 10, 49, 49])

2. Downsampling

Downsampling is similar to convolution in that a window slides across the feature map; the real difference is the goal, which is to further reduce the spatial size of the data. The most common downsampling operation is max pooling, which keeps only the largest value in each window.

input = [3, 4, 6, 5,
         2, 4, 6, 8,
         1, 6, 7, 8,
         9, 7, 4, 6]
input = torch.Tensor(input).view(1, 1, 4, 4)
# note: with kernel_size=2, the stride also defaults to 2
maxpooling_layer = torch.nn.MaxPool2d(kernel_size=2)
output = maxpooling_layer(input)
print(output)

Output:

tensor([[[[4., 8.],
          [9., 8.]]]])
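
Note that this stride default differs from Conv2d. A quick check (my addition, not from the original) shows what overriding it does:

import torch

# MaxPool2d's stride defaults to kernel_size, so kernel_size=2 halves W and H;
# with stride=1 the 2x2 windows overlap and a 4x4 input yields a 3x3 output.
x = torch.randn(1, 1, 4, 4)
print(torch.nn.MaxPool2d(kernel_size=2)(x).shape)            # torch.Size([1, 1, 2, 2])
print(torch.nn.MaxPool2d(kernel_size=2, stride=1)(x).shape)  # torch.Size([1, 1, 3, 3])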

3. A Basic Convolutional Network

 

Code notes:

1. torch.nn.Conv2d(1, 10, kernel_size=3, stride=2, bias=False)

The 1 is the number of input channels (a grayscale image has a single channel); 10 is the number of output channels, i.e. the first convolutional layer uses 10 kernels. kernel_size=3 makes each kernel 3x3. stride=2 is the step the kernel moves by during convolution (default 1). bias=False turns off the bias added after the convolution (note the default is actually bias=True). padding controls how much zero-padding surrounds the input (default 0).

2. self.fc = torch.nn.Linear(320, 10)

The 320 can be found by flattening with x = x.view(batch_size, -1) and printing x.shape, which gives (64, 320): 64 is the batch size, and 320 is the feature dimension going into the fully connected layer.
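
Where does 320 come from? A quick sanity check (my addition, separate from the program below) traces a dummy MNIST batch through the two convolution/pooling stages:

import torch

# 28x28 input -> conv1 (5x5, no padding) -> 24x24 -> 2x2 max pool -> 12x12
# -> conv2 (5x5) -> 8x8 -> 2x2 max pool -> 4x4, with 20 output channels
x = torch.randn(64, 1, 28, 28)
conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
pool = torch.nn.MaxPool2d(2)
x = pool(conv2(pool(conv1(x))))
print(x.shape)               # torch.Size([64, 20, 4, 4])
print(x.view(64, -1).shape)  # torch.Size([64, 320]), since 20 * 4 * 4 = 320

The full program: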

import torch
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt

# prepare dataset
batch_size = 64
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])

train_dataset = datasets.MNIST(root='../dataset/mnist/', train=True,
                               download=True, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
test_dataset = datasets.MNIST(root='../dataset/mnist/', train=False,
                              download=True, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)

# design model using class
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(320, 10)

    def forward(self, x):
        # flatten data from (n, 1, 28, 28) to (n, 320)
        batch_size = x.size(0)
        x = F.relu(self.pooling(self.conv1(x)))
        x = F.relu(self.pooling(self.conv2(x)))
        x = x.view(batch_size, -1)  # the -1 is inferred automatically as 320
        # print("x.shape", x.shape)
        x = self.fc(x)
        return x

model = Net()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# construct loss and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

# training cycle: forward, backward, update
def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        inputs, target = inputs.to(device), target.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0

def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('accuracy on test set: %d %%' % (100 * correct / total))
    return correct / total

if __name__ == '__main__':
    epoch_list = []
    acc_list = []
    for epoch in range(10):
        train(epoch)
        acc = test()
        epoch_list.append(epoch)
        acc_list.append(acc)
    plt.plot(epoch_list, acc_list)
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.show()
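
One detail worth noting (my observation, not from the original): the test loop uses torch.no_grad() but never calls model.eval(). For this model it makes no difference, since there are no dropout or batch-norm layers, but calling model.eval() before evaluation (and model.train() before training) is a good habit once such layers are added.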
