
ValueError: Using a target size (torch.Size([64])) that is different to the input size (torch.Size([

ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_19156/279535578.py in <module>
     54 # compute the loss
     55 label.data.fill_(1)
---> 56 error_real = criterion(output, label)
     57 error_real.backward()  # backpropagate the discriminator error
     58 D_x = output.data.mean()

c:\users\25566\miniconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

 

The error message above says the loss function received an input (the model output) and a target (the label) of different sizes. The fix is to locate where output and label get their shapes in the earlier code and make those shapes match, as in the discriminator below.
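The mismatch can be reproduced in isolation. A minimal sketch, assuming a batch size of 64 as in the traceback, with nn.BCELoss standing in for the criterion:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()

output = torch.rand(64, 1)  # discriminator output: shape [64, 1]
label = torch.ones(64)      # target labels: shape [64]

# Shapes differ, so the criterion raises
# "ValueError: Using a target size (torch.Size([64])) that is
#  different to the input size (torch.Size([64, 1]))".
try:
    criterion(output, label)
except ValueError as e:
    print(type(e).__name__)  # ValueError

# Removing the trailing dimension makes the shapes agree:
loss = criterion(output.squeeze(-1), label)
```

Note that output.squeeze(-1) only changes the shape, not the values, so the computed loss is exactly what the training loop intends.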

# Build the discriminator
class ModelD(nn.Module):
    def __init__(self):
        super(ModelD, self).__init__()
        self.model = nn.Sequential()  # build the network from a sequential container
        self.model.add_module('conv1', nn.Conv2d(num_channels, num_features, 5, 2, 0, bias=False))  # convolutional layer
        self.model.add_module('relu1', nn.ReLU())  # ReLU activation
        #self.model.add_module('relu1', nn.LeakyReLU(0.2, inplace=True))  # LeakyReLU avoids the dead-ReLU problem
        # second convolutional layer
        self.model.add_module('conv2', nn.Conv2d(num_features, num_features * 2, 5, 2, 0, bias=False))
        self.model.add_module('bnorm2', nn.BatchNorm2d(num_features * 2))
        self.model.add_module('linear1', nn.Linear(num_features * 2 * 4 * 4,  # fully connected layer
                                                   num_features))
        self.model.add_module('linear2', nn.Linear(num_features, 1))  # fully connected layer
        self.model.add_module('sigmoid', nn.Sigmoid())

    def forward(self, input):
        output = input
        # Loop over all modules in the network; when we reach linear1,
        # flatten the feature map first
        for name, module in self.model.named_children():
            if name == 'linear1':
                output = output.view(-1, num_features * 2 * 4 * 4)
            output = module(output)
        output = output.squeeze(-1)
        return output

The original code did not contain the line output = output.squeeze(-1); adding it before the return statement resolves the error.
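The mismatch can equally be removed from the other side by reshaping the label instead of the output. A small sketch of both options (the tensor shapes are assumptions matching the traceback):

```python
import torch

output = torch.rand(64, 1)      # simulated discriminator output, shape [64, 1]
label = torch.full((64,), 1.0)  # real/fake labels, shape [64]

# Option 1 (the fix used in this article): drop the trailing dim of the output.
fixed_output = output.squeeze(-1)  # shape becomes [64], matching label

# Option 2: add a trailing dim to the label instead.
fixed_label = label.unsqueeze(-1)  # shape becomes [64, 1], matching output
```

Either option works; fixing it inside forward (option 1) is usually tidier in a GAN, because both the real and fake passes through the discriminator then need no changes in the training loop.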
