
PyTorch neural-network workflow for classifying multi-dimensional input features: MNIST handwritten digit classification (the "hello world" of PyTorch)

A PyTorch multilayer perceptron (MLP) for the MNIST handwritten digit dataset.

1. Dataset:

MNIST: handwritten digit images (28×28 grayscale; 60,000 training images and 10,000 test images).

Download: see the code in section 3; set the dataset's download argument to True and torchvision will fetch MNIST automatically on the first run.
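As a minimal, standalone sketch of just this download step (the path ./data_set and the constants 0.1307/0.3081 match the main code below; 0.1307 and 0.3081 are the commonly quoted MNIST mean and standard deviation):

import torchvision
from torchvision import transforms

# download=True fetches MNIST into ./data_set on the first run
mnist = torchvision.datasets.MNIST(
    "./data_set", train=True, download=True,
    transform=transforms.Compose([
        transforms.ToTensor(),                       # 28x28 uint8 image -> 1x28x28 float tensor in [0, 1]
        transforms.Normalize((0.1307,), (0.3081,)),  # standardize with the MNIST mean/std
    ]))
print(len(mnist), mnist[0][0].shape)  # 60000 torch.Size([1, 28, 28])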

2. Model:

A fully connected network with layer sizes 784 → 512 → 256 → 128 → 10 and ReLU activations. Compared with the course's reference design, this article drops the 64-dimensional linear layer and maps the 128-dimensional features directly to the 10-dimensional output.

Loss: CrossEntropyLoss (cross entropy); see the note after this list on why the model outputs raw logits.

Optimizer: SGD (stochastic gradient descent).
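One detail worth calling out before the full listing (a small self-contained sketch, separate from the main code): CrossEntropyLoss combines log-softmax and negative log-likelihood internally, so the network's forward should return raw logits with no softmax layer, which is exactly what the model in section 3 does.

import torch
from torch.nn import CrossEntropyLoss

logits = torch.tensor([[2.0, 0.5, -1.0]])  # raw scores for 3 classes; no softmax applied
target = torch.tensor([0])                 # index of the true class
loss = CrossEntropyLoss()(logits, target)
# manual equivalent: negative log-softmax at the target index
manual = -torch.log_softmax(logits, dim=1)[0, target.item()]
print(loss.item(), manual.item())          # the two values match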

3. Python code:

import torch
import torchvision
from torch import nn
from torch.nn import Linear, ReLU, CrossEntropyLoss
from torch.optim import SGD
from torch.utils.data import DataLoader
from torchvision import transforms
import matplotlib.pyplot as plt

# Pick the compute device: use CUDA if a GPU is available, otherwise fall back to the CPU
device = ("cuda:0" if torch.cuda.is_available() else "cpu")

# Prepare the training and validation sets
train_dataset = torchvision.datasets.MNIST("./data_set/train_dataset", train=True, transform=transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]), download=True)
# train=False selects the 10,000-image test split, kept separate from the training data
val_dataset = torchvision.datasets.MNIST("./data_set/val_dataset", train=False, transform=transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]), download=True)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False)

# Build the model: a 784-512-256-128-10 MLP with ReLU activations
class model(nn.Module):
    def __init__(self):
        super(model, self).__init__()
        self.linear1 = Linear(784, 512, bias=True)
        self.linear2 = Linear(512, 256, bias=True)
        self.linear3 = Linear(256, 128, bias=True)
        self.linear4 = Linear(128, 10, bias=True)
        self.activate = ReLU()

    def forward(self, x):
        x = x.view(-1, 28 * 28)  # flatten each 1x28x28 image into a 784-dim vector
        x = self.linear1(x)
        x = self.activate(x)
        x = self.linear2(x)
        x = self.activate(x)
        x = self.linear3(x)
        x = self.activate(x)
        x = self.linear4(x)      # raw logits; CrossEntropyLoss applies log-softmax internally
        return x

# Instantiate the model
my_model = model()
# Move it to the GPU if available
my_model.to(device)
# Cross-entropy loss (reduction='mean' by default, i.e. averaged over the batch)
loss_cal = CrossEntropyLoss()
loss_cal = loss_cal.to(device)
# Optimizer: SGD with momentum
optimizer = SGD(my_model.parameters(), lr=0.01, momentum=0.1)

# Training function: one full pass over the training set
def train():
    loss_sum = 0
    for i, data in enumerate(train_loader):
        imgs, labels = data
        imgs = imgs.to(device)
        labels = labels.to(device)
        # Forward pass
        outs = my_model(imgs)
        loss = loss_cal(outs, labels)
        loss_sum = loss_sum + loss.item()
        # Zero the gradients
        optimizer.zero_grad()
        # Backpropagation
        loss.backward()
        # Parameter update
        optimizer.step()
        # Record and print the average loss every 50 batches
        if i % 50 == 49:
            loss_list.append(loss_sum / 50)
            print("loss:" + str(loss_sum / 50))
            loss_sum = 0

# Validation function: accuracy on the held-out set
def val(epoch):
    correct = 0
    total = 0
    with torch.no_grad():
        for i, data in enumerate(val_loader):
            imgs, labels = data
            imgs = imgs.to(device)
            labels = labels.to(device)
            outs = my_model(imgs)
            # The predicted class is the index of the largest logit
            value, index = torch.max(outs.data, dim=1)
            total += labels.size(0)
            correct += (index == labels).sum().item()
    print("epoch: " + str(epoch) + ", accuracy: " + str(correct / float(total)))

if __name__ == "__main__":
    loss_list = []
    for epoch in range(10):
        train()
        val(epoch)
    plt.figure()
    plt.plot(loss_list)
    plt.show()
Sample output (tail of a 10-epoch run):

# loss:0.08146183107048273
# loss:0.09409760734066368
# loss:0.08619683284312486
# loss:0.0925342983007431
# loss:0.10231521874666213
# loss:0.09232219552621246
# loss:0.08993134327232838
# loss:0.08221664376556874
# loss:0.08144009610638023
# loss:0.08002457775175571
# loss:0.09039016446098685
# loss:0.09695547364652157
# loss:0.08471773650497198
# loss:0.09309148231521248
# loss:0.08477838601917029
# loss:0.08601660847663879
# loss:0.08170276714488864
# loss:0.0860577792301774
# epoch: 8, accuracy: 0.9759166666666667
# loss:0.08367479223757983
# loss:0.08048359720036387
# loss:0.07391996223479509
# loss:0.08173186900094151
# loss:0.07706649962812662
# loss:0.08126413892954588
# loss:0.07230079263448715
# loss:0.08468491364270449
# loss:0.06976747298613191
# loss:0.07847631502896547
# loss:0.07585323289036751
# loss:0.08995127085596323
# loss:0.0836580672301352
# loss:0.07555182477459312
# loss:0.09279871977865696
# loss:0.06770527206361293
# loss:0.08244827814400196
# loss:0.07539539625868201
# epoch: 9, accuracy: 0.9793333333333333
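Once training finishes, a quick sanity check is to classify a single validation image. A minimal sketch reusing the objects defined in section 3 (val_dataset, my_model, device):

# Predict the class of one validation image (run after the training loop)
my_model.eval()              # no dropout/batch-norm here, but switching to eval mode is good practice
img, label = val_dataset[0]  # img: 1x28x28 tensor, label: int
with torch.no_grad():
    logits = my_model(img.to(device))   # forward() flattens the image to 784 features
    pred = logits.argmax(dim=1).item()  # index of the largest logit = predicted digit
print("predicted:", pred, "ground truth:", label)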

4. Visualization:

The curve plotted by plt.plot(loss_list) shows the average loss recorded every 50 batches: as training proceeds, the loss gradually decreases and converges.

5. All of the above comes from my own introductory study of PyTorch basics, recorded here briefly; if there are any mistakes, corrections are very welcome!

6. Some of the problem descriptions and theory figures reference teacher Liu's video slides, and this article is part of the course homework, so the video link is attached: 《PyTorch深度学习实践》完结合集_哔哩哔哩_bilibili (the complete "PyTorch Deep Learning Practice" series on Bilibili). I hope we all keep improving!
