
deeplearning with pytorch (4)

1. Convolutional Neural Network Model

torch.Tensor.view — PyTorch 2.2 documentation
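torch.Tensor.view reshapes a tensor without copying its data; in this model it flattens the pooled feature maps into a 400-element vector before the fully connected layers. The post only shows the printed model (see the output in section 2), so the following is a minimal sketch of a class matching that printout; the pooling layout, the forward pass, and the log_softmax output are assumptions inferred from the printed layer shapes and the two documentation links, not confirmed code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvolutionalNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        # matches the printed model: two conv layers, three fully connected layers
        self.conv1 = nn.Conv2d(1, 6, 3, 1)     # 1 input channel (grayscale), 6 filters, 3x3 kernel
        self.conv2 = nn.Conv2d(6, 16, 3, 1)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # 400 input features, as in the printout
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)           # 10 output classes

    def forward(self, X):
        X = F.relu(self.conv1(X))
        X = F.max_pool2d(X, 2, 2)   # 28x28 -> conv -> 26x26 -> pool -> 13x13 (assumed pooling)
        X = F.relu(self.conv2(X))
        X = F.max_pool2d(X, 2, 2)   # 13x13 -> conv -> 11x11 -> pool -> 5x5
        X = X.view(-1, 16 * 5 * 5)  # flatten with view: 16 channels * 5 * 5 = 400 features
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        return F.log_softmax(self.fc3(X), dim=1)  # log-probabilities over the 10 classes

Because the output is log_softmax, the matching criterion would be nn.NLLLoss (or, equivalently, raw logits with nn.CrossEntropyLoss).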

In a neural network, activation functions such as ReLU are used to introduce non-linearity, so that the network can learn and model complex function mappings. The ReLU (Rectified Linear Unit) activation is widely used, especially in hidden layers, because of its simplicity and efficiency. Whether to apply an activation function on the network's final layer, however, depends on the task:

  1. For classification tasks

    • For multi-class classification, the final layer typically uses a softmax activation, since softmax converts the outputs into a probability distribution in which the class probabilities sum to 1.
    • For binary classification, a sigmoid activation is sometimes used to squash the output to a value between 0 and 1, interpreted as a probability.
  2. For regression tasks

    • The final layer usually has no activation function, because we want to predict a continuous value rather than restrict it to a particular range (ReLU, for example, sets all negative values to 0, which may be inappropriate for regression). A sketch of these three output heads follows the LogSoftmax link below.

LogSoftmax — PyTorch 2.2 documentation
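LogSoftmax returns log-probabilities, so it pairs with nn.NLLLoss; nn.CrossEntropyLoss applies log-softmax internally and takes raw logits instead. As a minimal sketch of the three output-head choices above (the 84-feature input size is made up for illustration):

import torch.nn as nn

# Multi-class: log-softmax turns the outputs into (log-)probabilities over the classes.
multiclass_head = nn.Sequential(nn.Linear(84, 10), nn.LogSoftmax(dim=1))
multiclass_loss = nn.NLLLoss()   # expects log-probabilities

# Binary: sigmoid squashes the single output into (0, 1), read as a probability.
binary_head = nn.Sequential(nn.Linear(84, 1), nn.Sigmoid())
binary_loss = nn.BCELoss()

# Regression: no activation, so the prediction can be any real value.
regression_head = nn.Linear(84, 1)
regression_loss = nn.MSELoss()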

2. Train and Test CNN Model 
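The loop below trains for five epochs: for each training batch it computes predictions and the loss, counts correct predictions, and updates the parameters; after each epoch it evaluates on the test set under torch.no_grad() so the weights are never updated by test data. model, criterion, optimizer, and the two loaders are assumed to be defined earlier in this series.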

import time
import torch

start_time = time.time()

# create variables to track things
epochs = 5
train_losses = []
test_losses = []
train_correct = []
test_correct = []

# loop over epochs
for i in range(epochs):
    trn_corr = 0
    tst_corr = 0

    # Train
    for b, (X_train, y_train) in enumerate(train_loader):
        b += 1  # start our batches at 1
        y_pred = model(X_train)  # predictions on the training batch (not flattened)
        loss = criterion(y_pred, y_train)  # how far off we are: compare predictions to y_train

        predicted = torch.max(y_pred.data, 1)[1]  # predicted class = index of the largest output
        batch_corr = (predicted == y_train).sum()  # how many we got correct in this batch
        trn_corr += batch_corr  # running total of correct predictions while training

        # update our parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # print out some results
        if b % 600 == 0:
            print(f'Epoch: {i} Batch: {b} Loss:{loss.item()}')

    train_losses.append(loss.item())  # .item() stores a plain number, not a graph-attached tensor
    train_correct.append(trn_corr)

    # Test
    with torch.no_grad():  # no gradients, so we don't update our weights and biases on test data
        for b, (X_test, y_test) in enumerate(test_loader):
            y_val = model(X_test)
            predicted = torch.max(y_val.data, 1)[1]  # predicted class = index of the largest output
            tst_corr += (predicted == y_test).sum()

        loss = criterion(y_val, y_test)  # loss on the last test batch

    test_losses.append(loss.item())
    test_correct.append(tst_corr)

current_time = time.time()
total = current_time - start_time
print(f'Training Took: {total/60} minutes!')

Output of the training and testing process:

ConvolutionalNetwork(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
Epoch: 0 Batch: 600 Loss:0.16236098110675812
Epoch: 0 Batch: 1200 Loss:0.16147294640541077
Epoch: 0 Batch: 1800 Loss:0.46548572182655334
Epoch: 0 Batch: 2400 Loss:0.14589160680770874
Epoch: 0 Batch: 3000 Loss:0.006830060388892889
Epoch: 0 Batch: 3600 Loss:0.4129134714603424
Epoch: 0 Batch: 4200 Loss:0.004275710787624121
Epoch: 0 Batch: 4800 Loss:0.002969620516523719
Epoch: 0 Batch: 5400 Loss:0.04636438935995102
Epoch: 0 Batch: 6000 Loss:0.000430782965850085
Epoch: 1 Batch: 600 Loss:0.002715964335948229
Epoch: 1 Batch: 1200 Loss:0.17854242026805878
Epoch: 1 Batch: 1800 Loss:0.0020668990910053253
Epoch: 1 Batch: 2400 Loss:0.0038429438136518
Epoch: 1 Batch: 3000 Loss:0.03475978597998619
Epoch: 1 Batch: 3600 Loss:0.2954908013343811
Epoch: 1 Batch: 4200 Loss:0.02363143488764763
Epoch: 1 Batch: 4800 Loss:0.00022474219440482557
Epoch: 1 Batch: 5400 Loss:0.0005058477981947362
Epoch: 1 Batch: 6000 Loss:0.29113149642944336
Epoch: 2 Batch: 600 Loss:0.11854789406061172
Epoch: 2 Batch: 1200 Loss:0.003075268818065524
Epoch: 2 Batch: 1800 Loss:0.0007867529056966305
Epoch: 2 Batch: 2400 Loss:0.025718092918395996
Epoch: 2 Batch: 3000 Loss:0.020713506266474724
Epoch: 2 Batch: 3600 Loss:0.0005251148249953985
Epoch: 2 Batch: 4200 Loss:0.02623259648680687
Epoch: 2 Batch: 4800 Loss:0.0008421383099630475
Epoch: 2 Batch: 5400 Loss:0.12240316718816757
Epoch: 2 Batch: 6000 Loss:0.1951633244752884
Epoch: 3 Batch: 600 Loss:0.0012102334294468164
Epoch: 3 Batch: 1200 Loss:0.003382322611287236
Epoch: 3 Batch: 1800 Loss:0.002483583288267255
Epoch: 3 Batch: 2400 Loss:8.7084794358816e-05
Epoch: 3 Batch: 3000 Loss:0.0006959225866012275
Epoch: 3 Batch: 3600 Loss:0.0016453089192509651
Epoch: 3 Batch: 4200 Loss:0.04044409096240997
Epoch: 3 Batch: 4800 Loss:4.738060670206323e-05
Epoch: 3 Batch: 5400 Loss:0.1202053427696228
Epoch: 3 Batch: 6000 Loss:0.14659245312213898
Epoch: 4 Batch: 600 Loss:0.018919644877314568
Epoch: 4 Batch: 1200 Loss:0.07315998524427414
Epoch: 4 Batch: 1800 Loss:0.07178398221731186
Epoch: 4 Batch: 2400 Loss:0.0009470336954109371
Epoch: 4 Batch: 3000 Loss:0.0004728620406240225
Epoch: 4 Batch: 3600 Loss:0.24831190705299377
Epoch: 4 Batch: 4200 Loss:0.0003230355796404183
Epoch: 4 Batch: 4800 Loss:0.0002209811209468171
Epoch: 4 Batch: 5400 Loss:0.04399774223566055
Epoch: 4 Batch: 6000 Loss:0.00020674565166700631
Training Took: 1.3477467536926269 minutes!
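As a quick follow-up, the tracked counters can be turned into per-epoch accuracies. This is a minimal sketch assuming the MNIST loaders used earlier in this series (60,000 training and 10,000 test images; the 6,000 batches per epoch above match 60,000 images at batch size 10):

# train_correct / test_correct hold one cumulative correct-count tensor per epoch
train_acc = [t.item() / 60000 for t in train_correct]  # 60,000 MNIST training images (assumed)
test_acc = [t.item() / 10000 for t in test_correct]    # 10,000 MNIST test images (assumed)
print(f'Final test accuracy: {test_acc[-1]:.4f}')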

 
