
PyTorch Classic CNNs (Extra-1): CNN Visualization (Viewing Intermediate Feature Maps)


Ever since ZF-Net, people have been studying how to visualize neural networks and their filters.

This took a long time to put together from several articles; what is online is really hit-or-miss, and very little of the code actually runs... or maybe my coding skills just aren't there yet? sad.

Visualizing the feature maps output by each CNN layer

This is the same approach as in Pytorch(十四) —— 查看中间层feature_map & 卷积核权重可视化 (hxxjxw's CSDN blog), except that this version does not use hooks, which makes it somewhat less efficient.
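For comparison, the hook-based variant looks roughly like this. This is a minimal sketch with a tiny stand-in model of my own (the article itself uses resnet101); `register_forward_hook` is the standard PyTorch mechanism for capturing intermediate outputs without rewriting the forward pass:

```python
import torch
import torch.nn as nn

# Tiny stand-in model (hypothetical); the article uses resnet101 instead
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
)

feature_maps = {}  # layer name -> captured output tensor

def save_output(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

# Register one forward hook per child module
handles = [m.register_forward_hook(save_output(n)) for n, m in model.named_children()]

with torch.no_grad():
    model(torch.randn(1, 3, 32, 32))

for h in handles:
    h.remove()  # always remove hooks when done

print(sorted(feature_maps))            # ['0', '1', '2']
print(tuple(feature_maps['2'].shape))  # (1, 8, 16, 16)
```

The upside over the loop-over-`_modules` approach below is that hooks also capture outputs of modules nested arbitrarily deep, and the forward pass stays untouched.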

```python
import os
import numpy as np
import skimage.io
import skimage.transform
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
import cv2

# Extract feature maps from the given layers of a network
class FeatureExtractor(nn.Module):
    def __init__(self, submodule, extracted_layers):
        super(FeatureExtractor, self).__init__()
        self.submodule = submodule
        self.extracted_layers = extracted_layers

    def forward(self, x):
        outputs = {}
        for name, module in self.submodule._modules.items():
            if "fc" in name:
                x = x.view(x.size(0), -1)  # flatten before the fully connected layer
            x = module(x)
            print(name)
            if (self.extracted_layers is None) or (name in self.extracted_layers and 'fc' not in name):
                outputs[name] = x
        return outputs

def get_picture(pic_name, transform):
    img = skimage.io.imread(pic_name)
    img = skimage.transform.resize(img, (256, 256))  # resize to (256, 256) on load
    img = np.asarray(img, dtype=np.float32)
    return transform(img)

def make_dirs(path):
    if not os.path.exists(path):
        os.makedirs(path)

pic_dir = 'dataset/dogsvscats/train/cat.1700.jpg'
transform = transforms.ToTensor()
img = get_picture(pic_dir, transform)
img = img.unsqueeze(0)  # add a batch dimension
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
img = img.to(device)

net = models.resnet101(pretrained=True).to(device)
dst = './features'
therd_size = 256  # threshold: upscale any feature map smaller than this

myexactor = FeatureExtractor(submodule=net, extracted_layers=None)
output = myexactor(img)
# output is a dict:
# dict_keys(['conv1', 'bn1', 'relu', 'maxpool', 'layer1', 'layer2', 'layer3', 'layer4', 'avgpool', 'fc'])

for idx, (k, v) in enumerate(output.items()):
    if 'fc' in k:  # the fc output is not image-shaped, so skip it
        continue
    features = v[0]
    for i in range(features.shape[0]):
        feature = features.data.cpu().numpy()
        feature_img = feature[i, :, :]
        feature_img = np.asarray(feature_img * 255, dtype=np.uint8)
        dst_path = os.path.join(dst, str(idx) + '-' + k)
        make_dirs(dst_path)
        feature_img = cv2.applyColorMap(feature_img, cv2.COLORMAP_JET)
        if feature_img.shape[0] < therd_size:
            # also save an upscaled copy for the later, smaller feature maps
            tmp_file = os.path.join(dst_path, str(i) + '_' + str(therd_size) + '.png')
            tmp_img = cv2.resize(feature_img.copy(), (therd_size, therd_size),
                                 interpolation=cv2.INTER_NEAREST)
            cv2.imwrite(tmp_file, tmp_img)
        dst_file = os.path.join(dst_path, str(i) + '.png')
        cv2.imwrite(dst_file, feature_img)
```

The resulting feature_img values are in the 0-255 range.
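One caveat: the listing multiplies activations by 255 directly, which assumes they lie in [0, 1]; post-ReLU activations can exceed 1 and wrap around when cast to uint8. A min-max normalization is safer. The helper below is my own addition, not part of the original code:

```python
import numpy as np

def to_uint8(feature_map, eps=1e-8):
    """Min-max normalize a 2-D feature map into the 0-255 uint8 range."""
    fmin, fmax = feature_map.min(), feature_map.max()
    scaled = (feature_map - fmin) / (fmax - fmin + eps)  # now in [0, 1]
    return np.rint(scaled * 255).astype(np.uint8)

# Activations above 1.0 would overflow with a bare "* 255"; here they map cleanly
fm = np.array([[0.0, 2.5], [5.0, 10.0]], dtype=np.float32)
print(to_uint8(fm))  # values span 0..255 instead of overflowing
```

The `eps` term is only there to avoid division by zero on a constant feature map.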

The input image is a picture from the Kaggle dogs-vs-cats dataset.

The layers extracted, in order:

conv1
bn1
relu
maxpool
layer1
layer2
layer3
layer4
avgpool
fc

Output

Because the feature maps get smaller and smaller in the later layers, a scaling step is applied: for each feature map we save both the original-size image and an enlarged copy.

0-conv1

1-bn1

2-relu

3-maxpool

4-layer1

5-layer2

6-layer3

7-layer4

8-avgpool

As you can see, in the output of the first conv layer the shape of the cat is still recognizable in the feature maps, while the feature maps from the later conv layers look more like heatmaps and no longer resemble a cat at all: they are a more abstract representation of the image.

Visualizing CNN convolution kernel weights

A conv layer has some number of filters, say 64, and each filter is itself three-dimensional, sweeping across the R, G, and B channels. For the visualization here, only the first channel of each filter is displayed.
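Concretely, a conv weight tensor is laid out as (out_channels, in_channels, kH, kW), and the plots below use only the `[i, 0, :, :]` slice. A quick sketch with a freshly constructed layer of the same geometry as AlexNet's first conv (no pretrained download needed):

```python
import torch.nn as nn

# Same geometry as AlexNet's first conv layer: 64 filters over 3 input channels
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=11, stride=4)
w = conv.weight                    # (out_channels, in_channels, kH, kW)
print(tuple(w.shape))              # (64, 3, 11, 11)

first_channel = w[0, 0]            # the 2-D slice that actually gets plotted
print(tuple(first_channel.shape))  # (11, 11)
```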

Flattened out and plotted as a histogram, that is what the weight distribution looks like (see Pytorch(十四) —— 查看中间层feature_map & 卷积核权重可视化, hxxjxw's CSDN blog).

```python
import torch
import torchvision.models as models
import matplotlib.pyplot as plt
from PIL import Image
from torchvision import transforms

input_image = Image.open('dataset/dogsvscats/train/cat.1700.jpg')
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)  # create a mini-batch as expected by the model

model = models.alexnet(pretrained=True)
if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    model.to('cuda')
with torch.no_grad():
    output = model(input_batch)

# Kernel visualization
# After the forward pass, the conv weights can be pulled straight out of the
# model via the interfaces torch provides.
for layer, module in model.features.named_children():
    if layer not in ['0', '3', '6', '8', '10']:  # only conv layers can be visualized; skip maxpool and relu
        continue
    filters = module.weight.cpu().clone()
    num = len(filters)
    print("total number of filters:", num)
    plt.figure(figsize=(20, 17))
    for i in range(min(num, 64)):  # show at most 64 kernels, starting from index 0
        plt.subplot(8, 8, i + 1)
        plt.axis('off')
        plt.imshow(filters[i][0, :, :].detach(), cmap='gray')
    plt.show()
```

conv1

conv2

conv3

conv4

conv5

You can see that the first layer's kernels are still fairly easy for a human to interpret: some extract edges, some extract circles, some extract blobs, and so on.

The kernels of the last conv layer no longer show anything recognizable; they extract more abstract features.
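The flattened-weight histogram mentioned earlier can be produced like this. This is a sketch using a randomly initialized layer so it runs offline; with the pretrained model you would pass `model.features[0].weight` instead:

```python
import torch.nn as nn
import matplotlib
matplotlib.use('Agg')  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

conv = nn.Conv2d(3, 64, kernel_size=11)           # stand-in for a pretrained conv layer
weights = conv.weight.detach().flatten().numpy()  # 64 * 3 * 11 * 11 values
print(weights.shape)  # (23232,)

plt.hist(weights, bins=50)
plt.xlabel('weight value')
plt.ylabel('count')
plt.savefig('weight_hist.png')
```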

https://nbviewer.jupyter.org/github/fchollet/deep-learning-with-python-notebooks/blob/master/5.4-visualizing-what-convnets-learn.ipynb

使用pytorch查看中间层特征矩阵以及卷积核参数 (越前浩波's CSDN blog)
