torch.Tensor.view — PyTorch 2.2 documentation
In neural networks, activation functions such as ReLU are used to introduce non-linearity, which lets the network learn and model complex function mappings. ReLU (Rectified Linear Unit) is widely used because of its simplicity and efficiency, especially in hidden layers. Whether to apply an activation function in the network's final layer, however, depends on the task:
For classification tasks: the last layer usually applies a softmax activation, because softmax converts the outputs into a probability distribution in which the per-class probabilities sum to 1. A sigmoid activation instead squashes each output to between 0 and 1, which can be read as a probability.
For regression tasks: the last layer is typically left linear (no activation), so the network can output arbitrary real values.
LogSoftmax — PyTorch 2.2 documentation
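Since both softmax variants are linked above, here is a minimal standalone sketch (with made-up logits and labels) of the two equivalent output-layer pairings in PyTorch: raw logits fed to nn.CrossEntropyLoss, or nn.LogSoftmax in the model followed by nn.NLLLoss:

```python
import torch
import torch.nn as nn

# Hypothetical logits and labels for a 10-class problem (illustrative values only)
logits = torch.randn(4, 10)            # raw scores for a batch of 4 samples
targets = torch.tensor([3, 7, 0, 2])   # ground-truth class indices

# Option 1: leave the last layer linear; CrossEntropyLoss applies log-softmax internally
loss_a = nn.CrossEntropyLoss()(logits, targets)

# Option 2: apply LogSoftmax in the model and pair it with NLLLoss
log_probs = nn.LogSoftmax(dim=1)(logits)
loss_b = nn.NLLLoss()(log_probs, targets)

print(torch.allclose(loss_a, loss_b))  # True: both pairings compute the same loss
```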
```python
import torch
import time

# model, criterion, optimizer, train_loader and test_loader
# are defined earlier in the post
start_time = time.time()

# create variables to track things
epochs = 5
train_losses = []
test_losses = []
train_correct = []
test_correct = []

# for loop of epochs
for i in range(epochs):
    trn_corr = 0
    tst_corr = 0

    # Train
    for b, (X_train, y_train) in enumerate(train_loader):
        b += 1  # start our batches at 1
        y_pred = model(X_train)  # get predicted values from the training set (input is not flattened)
        loss = criterion(y_pred, y_train)  # how far off we are: compare the predictions to y_train

        predicted = torch.max(y_pred.data, 1)[1]  # index of the largest logit = predicted class
        batch_corr = (predicted == y_train).sum().item()  # how many we got correct in this batch
        trn_corr += batch_corr  # keep a running total as we train

        # update our parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # print out some results
        if b % 600 == 0:
            print(f'Epoch: {i} Batch: {b} Loss:{loss.item()}')

    train_losses.append(loss.item())  # store a plain float, not the graph-attached tensor
    train_correct.append(trn_corr)

    # Test
    with torch.no_grad():  # no gradients, so the test data never updates our weights and biases
        for b, (X_test, y_test) in enumerate(test_loader):
            y_val = model(X_test)
            predicted = torch.max(y_val.data, 1)[1]  # index of the largest logit = predicted class
            tst_corr += (predicted == y_test).sum().item()

        loss = criterion(y_val, y_test)  # loss on the last test batch
        test_losses.append(loss.item())
        test_correct.append(tst_corr)

current_time = time.time()
total = current_time - start_time
print(f'Training Took: {total/60} minutes!')
```
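As a follow-up not shown in the original post, the tracked lists can be plotted to compare training and test loss per epoch; a minimal sketch using matplotlib:

```python
import matplotlib.pyplot as plt

# Plot the per-epoch losses collected above (one value per epoch)
plt.plot(train_losses, label='training loss')
plt.plot(test_losses, label='test loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()
```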

Training and testing process
```
ConvolutionalNetwork(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
Epoch: 0 Batch: 600 Loss:0.16236098110675812
Epoch: 0 Batch: 1200 Loss:0.16147294640541077
Epoch: 0 Batch: 1800 Loss:0.46548572182655334
Epoch: 0 Batch: 2400 Loss:0.14589160680770874
Epoch: 0 Batch: 3000 Loss:0.006830060388892889
Epoch: 0 Batch: 3600 Loss:0.4129134714603424
Epoch: 0 Batch: 4200 Loss:0.004275710787624121
Epoch: 0 Batch: 4800 Loss:0.002969620516523719
Epoch: 0 Batch: 5400 Loss:0.04636438935995102
Epoch: 0 Batch: 6000 Loss:0.000430782965850085
Epoch: 1 Batch: 600 Loss:0.002715964335948229
Epoch: 1 Batch: 1200 Loss:0.17854242026805878
Epoch: 1 Batch: 1800 Loss:0.0020668990910053253
Epoch: 1 Batch: 2400 Loss:0.0038429438136518
Epoch: 1 Batch: 3000 Loss:0.03475978597998619
Epoch: 1 Batch: 3600 Loss:0.2954908013343811
Epoch: 1 Batch: 4200 Loss:0.02363143488764763
Epoch: 1 Batch: 4800 Loss:0.00022474219440482557
Epoch: 1 Batch: 5400 Loss:0.0005058477981947362
Epoch: 1 Batch: 6000 Loss:0.29113149642944336
Epoch: 2 Batch: 600 Loss:0.11854789406061172
Epoch: 2 Batch: 1200 Loss:0.003075268818065524
Epoch: 2 Batch: 1800 Loss:0.0007867529056966305
Epoch: 2 Batch: 2400 Loss:0.025718092918395996
Epoch: 2 Batch: 3000 Loss:0.020713506266474724
Epoch: 2 Batch: 3600 Loss:0.0005251148249953985
Epoch: 2 Batch: 4200 Loss:0.02623259648680687
Epoch: 2 Batch: 4800 Loss:0.0008421383099630475
Epoch: 2 Batch: 5400 Loss:0.12240316718816757
Epoch: 2 Batch: 6000 Loss:0.1951633244752884
Epoch: 3 Batch: 600 Loss:0.0012102334294468164
Epoch: 3 Batch: 1200 Loss:0.003382322611287236
Epoch: 3 Batch: 1800 Loss:0.002483583288267255
Epoch: 3 Batch: 2400 Loss:8.7084794358816e-05
Epoch: 3 Batch: 3000 Loss:0.0006959225866012275
Epoch: 3 Batch: 3600 Loss:0.0016453089192509651
Epoch: 3 Batch: 4200 Loss:0.04044409096240997
Epoch: 3 Batch: 4800 Loss:4.738060670206323e-05
Epoch: 3 Batch: 5400 Loss:0.1202053427696228
Epoch: 3 Batch: 6000 Loss:0.14659245312213898
Epoch: 4 Batch: 600 Loss:0.018919644877314568
Epoch: 4 Batch: 1200 Loss:0.07315998524427414
Epoch: 4 Batch: 1800 Loss:0.07178398221731186
Epoch: 4 Batch: 2400 Loss:0.0009470336954109371
Epoch: 4 Batch: 3000 Loss:0.0004728620406240225
Epoch: 4 Batch: 3600 Loss:0.24831190705299377
Epoch: 4 Batch: 4200 Loss:0.0003230355796404183
Epoch: 4 Batch: 4800 Loss:0.0002209811209468171
Epoch: 4 Batch: 5400 Loss:0.04399774223566055
Epoch: 4 Batch: 6000 Loss:0.00020674565166700631
Training Took: 1.3477467536926269 minutes!
```
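The model class itself is not shown in this excerpt. For reference, here is a sketch of a ConvolutionalNetwork definition consistent with the layer summary printed above, assuming 28x28 MNIST inputs and 2x2 max-pooling after each convolution; only the layer shapes come from the printed summary, while the forward pass and pooling choices are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvolutionalNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 3, 1)          # 1 input channel -> 6 filters, 3x3 kernel
        self.conv2 = nn.Conv2d(6, 16, 3, 1)         # 6 -> 16 filters, 3x3 kernel
        self.fc1 = nn.Linear(16 * 5 * 5, 120)       # 400 flattened features, as printed above
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)                # 10 output classes

    def forward(self, X):
        X = F.relu(self.conv1(X))                   # 28x28 -> 26x26
        X = F.max_pool2d(X, 2, 2)                   # 26x26 -> 13x13
        X = F.relu(self.conv2(X))                   # 13x13 -> 11x11
        X = F.max_pool2d(X, 2, 2)                   # 11x11 -> 5x5
        X = X.view(-1, 16 * 5 * 5)                  # flatten with torch.Tensor.view (linked above)
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        return F.log_softmax(self.fc3(X), dim=1)    # with this output, criterion would be nn.NLLLoss

print(ConvolutionalNetwork())  # prints a layer summary matching the one above
```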