
deeplearning with pytorch (5)


The `.view()` method reshapes a tensor in PyTorch. Here it is used to reshape a single sample into the input shape the model expects. Specifically, `(1, 1, 28, 28)` creates a new view of the tensor where:

  • The first 1 is the batch size; it is 1 because you are predicting only one sample.
  • The second 1 is the number of color channels, which is typical for grayscale images: each pixel has a single intensity value. For RGB images this would be 3.
  • 28, 28 are the image height and width, the standard dimensions of the MNIST handwritten-digit dataset.
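To illustrate the reshape described above, here is a minimal sketch using a random tensor in place of an actual MNIST sample:

```python
import torch

# a single 28x28 grayscale image stored as a 2-D tensor (stand-in for an MNIST sample)
img = torch.rand(28, 28)

# .view() returns a reshaped view without copying the data:
# (batch=1, channels=1, height=28, width=28)
batched = img.view(1, 1, 28, 28)
print(batched.shape)  # torch.Size([1, 1, 28, 28])
```

Note that `.view()` requires the tensor to be contiguous in memory; `.reshape()` is a more forgiving alternative that copies when necessary.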
```python
# graph the loss at each epoch
train_losses = [tl.item() for tl in train_losses]
plt.plot(train_losses, label="training loss")
plt.plot(test_losses, label="validation loss")
plt.title("Loss at Epoch")
plt.legend()

# graph the accuracy at the end of each epoch
# dividing the correct counts converts them to percentages
# (60,000 training images / 100, 10,000 test images / 100)
plt.plot([t / 600 for t in train_correct], label="training accuracy")
plt.plot([t / 100 for t in test_correct], label="validation accuracy")
plt.title("Accuracy at the End of Each Epoch")
plt.legend()

# evaluate on the entire test set in a single batch
test_load_everything = DataLoader(test_data, batch_size=10000, shuffle=False)

with torch.no_grad():
    correct = 0
    for X_test, y_test in test_load_everything:
        y_val = model(X_test)
        predicted = torch.max(y_val, 1)[1]
        correct += (predicted == y_test).sum()

# percentage of correct predictions
correct.item() / len(test_data) * 100
```
## Send a New Image Through the Model
```python
# grab an image: test_data[4143] is a (tensor, label) tuple
test_data[4143]
# grab just the image data
test_data[4143][0]
# reshape it
test_data[4143][0].reshape(28, 28)
# show the image
plt.imshow(test_data[4143][0].reshape(28, 28))

# pass the image through our model
model.eval()
with torch.no_grad():
    new_prediction = model(test_data[4143][0].view(1, 1, 28, 28))  # batch of 1, 1 color channel, 28x28 image

# check the new prediction and the predicted class
new_prediction
new_prediction.argmax()
```
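Note that `new_prediction` holds the model's raw output scores (logits), not probabilities; `argmax()` works on either, since softmax preserves the ordering. If you want actual probabilities, you can apply softmax. A minimal sketch, using a made-up logits tensor in place of the model output:

```python
import torch
import torch.nn.functional as F

# hypothetical logits for one sample over the 10 MNIST classes
logits = torch.tensor([[0.2, 0.1, 3.5, 0.0, -1.0, 0.3, 0.2, 0.1, 0.0, 0.4]])

# softmax turns raw scores into probabilities that sum to 1
probs = F.softmax(logits, dim=1)
predicted_class = probs.argmax(dim=1)
print(probs)
print(predicted_class)  # tensor([2])
```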

The complete .py file is available at GitHub - daichang01/neraual_network_learning at dev
