
Troubleshooting EGNet (1): test error

ValueError: non-broadcastable output operand with shape () doesn't match the broadcast shape (3,)

1 Command used: (PS: mode is test; the saliency dataset is ECSSD)

python3 run.py --mode test --sal_mode e

2 Model & dataset placement: (PS: Are the two selected items placed correctly? Yes. This ECSSD copy was downloaded separately, and its contents were renamed as shown)

3 Error: (PS: the core of the error; the cause is that the images were not being read)

ValueError: non-broadcastable output operand with shape () doesn't match the broadcast shape (3,)
File Not Exists
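The two messages are linked: `File Not Exists` is printed when `cv2.imread` cannot open a path, and the `ValueError` follows directly from that. A minimal sketch of the failure mode (the mean values mirror the BGR means used in `dataset.py`):

```python
import numpy as np

# When cv2.imread() cannot open a file it returns None; converting None
# to a float array yields a 0-dimensional array with shape ().
in_ = np.array(None, dtype=np.float32)
print(in_.shape)  # ()

# Subtracting the per-channel BGR mean (shape (3,)) in place then fails,
# because a 0-d output operand cannot absorb a shape-(3,) result.
mean = np.array((104.00699, 116.66877, 122.67892))
try:
    in_ -= mean
except ValueError as exc:
    print(exc)  # non-broadcastable output operand with shape () ...
```

So the broadcast error is a symptom: fix the image paths and the `ValueError` disappears.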

4 Notes: (PS: I changed the absolute paths to relative paths, as follows)
run.py

if __name__ == '__main__':
    # vgg_path = '/home/liuj/code/Messal/weights/vgg16_20M.pth'
    vgg_path = './weights/vgg16_20M.pth'
    # resnet_path = '/home/liuj/code/Messal/weights/resnet50_caffe.pth'
    resnet_path = './weights/resnet50_caffe.pth'
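Since relative paths resolve against the current working directory, run.py must be launched from the repository root for these to be found. A small hypothetical helper (not part of the EGNet repo) to fail fast when weight files are missing:

```python
from pathlib import Path

def missing_weights(paths):
    """Return the paths that do not point to an existing file."""
    return [p for p in paths if not Path(p).is_file()]

# Hypothetical usage at the top of run.py:
# missing = missing_weights(['./weights/vgg16_20M.pth',
#                            './weights/resnet50_caffe.pth'])
# if missing:
#     raise FileNotFoundError('weight files not found: %s' % missing)
```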

dataset.py (PS: I only changed the `e` dataset branch under test_mode == 1, since that is the one being used)
        self.image_root = './ECSSD/'  # todo: drop the Imgs suffix here, otherwise the path repeats as Imgs/Imgs
 

class ImageDataTest(data.Dataset):
    def __init__(self, test_mode=1, sal_mode='e'):
        if test_mode == 0:
            # self.image_root = '/home/liuj/dataset/saliency_test/ECSSD/Imgs/'
            # self.image_source = '/home/liuj/dataset/saliency_test/ECSSD/test.lst'
            self.image_root = '/home/liuj/dataset/HED-BSDS_PASCAL/HED-BSDS/test/'
            self.image_source = '/home/liuj/dataset/HED-BSDS_PASCAL/HED-BSDS/test.lst'
        elif test_mode == 1:
            if sal_mode == 'e':
                # self.image_root = '/home/liuj/dataset/saliency_test/ECSSD/Imgs/' # todo
                # self.image_root = './ECSSD/Imgs/'
                self.image_root = './ECSSD/'
                # self.image_source = '/home/liuj/dataset/saliency_test/ECSSD/test.lst'
                self.image_source = './ECSSD/test.lst'
                # self.test_fold = '/media/ubuntu/disk/Result/saliency/ECSSD/'
                self.test_fold = './Result/saliency/ECSSD/'

./ECSSD/test.lst is not provided on GitHub, so I wrote it myself (left side below).
The directory layout is shown on the right.

This layout is the correct one; the structure above it is used for training.
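Instead of writing test.lst by hand, it can be generated. This is a hypothetical helper, not part of the EGNet repo; it assumes each line should carry the Imgs/ prefix so that os.path.join(image_root, name) with image_root='./ECSSD/' resolves to the real file:

```python
import os

def write_test_lst(image_dir, out_path, prefix='Imgs'):
    """List every image in image_dir, one '<prefix>/<name>' entry per line."""
    names = sorted(n for n in os.listdir(image_dir)
                   if n.lower().endswith(('.jpg', '.png')))
    with open(out_path, 'w') as f:
        for n in names:
            f.write('%s/%s\n' % (prefix, n))

# e.g. write_test_lst('./ECSSD/Imgs', './ECSSD/test.lst')
```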

Also, note the output filenames: name[:-4] needs to be changed to name[3:-4], which creates an s folder.

The final output path looks like ./Result/saliency/ECSSD/EGNet_ResNet50/s/806.png
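Why name[3:-4] behaves this way (assuming a list entry such as 'Imgs/806.jpg', a hypothetical name consistent with the layout above): the slice drops the leading 'Img' and the trailing '.jpg', leaving 's/806', which is where the s folder comes from:

```python
name = 'Imgs/806.jpg'   # hypothetical entry from test.lst
print(name[:-4])        # Imgs/806  -> would nest the output under an Imgs folder
print(name[3:-4])       # s/806     -> saved as .../EGNet_ResNet50/s/806.png
```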

5 Full error output: (the problem below has been resolved by the fixes above)

(env_py36) hp@hp-X599:~/zjc/nk_PyCharm/PyCharm_project/EGNet-master$ python3 run.py --mode test --sal_mode e
trueUnify bone part
TUN_bone(
  (convert): ConvertLayer(
    (convert0): ModuleList(
      (0): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): ReLU(inplace)
      )
      (1): Sequential(
        (0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): ReLU(inplace)
      )
      (2): Sequential(
        (0): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): ReLU(inplace)
      )
      (3): Sequential(
        (0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): ReLU(inplace)
      )
      (4): Sequential(
        (0): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): ReLU(inplace)
      )
    )
  )
  (base): ResNet(
    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace)
    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=True)
    (layer1): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (downsample): Sequential(
          (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
      (2): Bottleneck(
        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
    )
    (layer2): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (downsample): Sequential(
          (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
      (2): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
      (3): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
    )
    (layer3): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (downsample): Sequential(
          (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
      (2): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
      (3): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
      (4): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
      (5): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
    )
    (layer4): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (downsample): Sequential(
          (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
      (2): Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
      )
    )
  )
  (merge1): MergeLayer1(
    (trans): ModuleList(
      (0): Sequential(
        (0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): ReLU(inplace)
      )
      (1): Sequential(
        (0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): ReLU(inplace)
      )
      (2): Sequential(
        (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): ReLU(inplace)
      )
    )
    (up): ModuleList(
      (0): Sequential(
        (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace)
        (2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU(inplace)
        (4): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): ReLU(inplace)
      )
      (1): Sequential(
        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace)
        (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU(inplace)
        (4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): ReLU(inplace)
      )
      (2): Sequential(
        (0): Conv2d(512, 512, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (1): ReLU(inplace)
        (2): Conv2d(512, 512, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (3): ReLU(inplace)
        (4): Conv2d(512, 512, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (5): ReLU(inplace)
      )
      (3): Sequential(
        (0): Conv2d(512, 512, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (1): ReLU(inplace)
        (2): Conv2d(512, 512, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (3): ReLU(inplace)
        (4): Conv2d(512, 512, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (5): ReLU(inplace)
      )
      (4): Sequential(
        (0): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
        (1): ReLU(inplace)
        (2): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
        (3): ReLU(inplace)
        (4): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
        (5): ReLU(inplace)
      )
    )
    (score): ModuleList(
      (0): Conv2d(128, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): Conv2d(256, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (2): Conv2d(512, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): Conv2d(512, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (4): Conv2d(512, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (relu): ReLU()
  )
  (merge2): MergeLayer2(
    (trans): ModuleList(
      (0): ModuleList(
        (0): Sequential(
          (0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): ReLU(inplace)
        )
        (1): Sequential(
          (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): ReLU(inplace)
        )
        (2): Sequential(
          (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): ReLU(inplace)
        )
        (3): Sequential(
          (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): ReLU(inplace)
        )
      )
    )
    (up): ModuleList(
      (0): ModuleList(
        (0): Sequential(
          (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): ReLU(inplace)
          (2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (3): ReLU(inplace)
          (4): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (5): ReLU(inplace)
        )
        (1): Sequential(
          (0): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (1): ReLU(inplace)
          (2): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (3): ReLU(inplace)
          (4): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (5): ReLU(inplace)
        )
        (2): Sequential(
          (0): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (1): ReLU(inplace)
          (2): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (3): ReLU(inplace)
          (4): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
          (5): ReLU(inplace)
        )
        (3): Sequential(
          (0): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
          (1): ReLU(inplace)
          (2): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
          (3): ReLU(inplace)
          (4): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
          (5): ReLU(inplace)
        )
      )
    )
    (score): ModuleList(
      (0): ModuleList(
        (0): Conv2d(128, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): Conv2d(128, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (2): Conv2d(128, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): Conv2d(128, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
    )
    (final_score): Sequential(
      (0): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
      (1): ReLU(inplace)
      (2): Conv2d(128, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (relu): ReLU()
  )
)
The number of parameters: 111692618
Loading pre-trained model from ./epoch_resnet.pth...
File Not Exists
File Not Exists
File Not Exists
File Not Exists
File Not Exists
File Not Exists
Traceback (most recent call last):
  File "run.py", line 69, in <module>
File Not Exists
File Not Exists
    main(config)
  File "run.py", line 23, in main
    test.test(test_mode=config.test_mode)
  File "/home/hp/zjc/nk_PyCharm/PyCharm_project/EGNet-master/solver.py", line 109, in test
    for i, data_batch in enumerate(self.test_loader):
  File "/home/hp/miniconda3/envs/env_py36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 336, in __next__
    return self._process_next_batch(batch)
  File "/home/hp/miniconda3/envs/env_py36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 357, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
ValueError: Traceback (most recent call last):
  File "/home/hp/miniconda3/envs/env_py36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 106, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/hp/miniconda3/envs/env_py36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 106, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/hp/zjc/nk_PyCharm/PyCharm_project/EGNet-master/dataset.py", line 99, in __getitem__
    image, im_size = load_image_test(os.path.join(self.image_root, self.image_list[item]))
  File "/home/hp/zjc/nk_PyCharm/PyCharm_project/EGNet-master/dataset.py", line 145, in load_image_test
    in_ -= np.array((104.00699, 116.66877, 122.67892))
ValueError: non-broadcastable output operand with shape () doesn't match the broadcast shape (3,)
File Not Exists
