model.modules(), model.children(), model.named_children(), model.parameters(), and Other Module Methods in PyTorch

This article works through an example to observe and explain the return values of the PyTorch model instance methods model.modules(), model.named_modules(), model.children(), model.named_children(), model.parameters(), model.named_parameters(), and model.state_dict(). The example is as follows:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, num_class=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3),
            nn.BatchNorm2d(6),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=6, out_channels=9, kernel_size=3),
            nn.BatchNorm2d(9),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.classifier = nn.Sequential(
            nn.Linear(9*8*8, 128),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(128, num_class)
        )

    def forward(self, x):
        output = self.features(x)
        output = output.view(output.size()[0], -1)
        output = self.classifier(output)
        return output

model = Net()

The code above defines a network model consisting of two convolutional layers and two fully connected layers. Note that this Net has three levels of nesting, from the outside in:

Net:
----features:
------------Conv2d
------------BatchNorm2d
------------ReLU
------------MaxPool2d
------------Conv2d
------------BatchNorm2d
------------ReLU
------------MaxPool2d
----classifier:
------------Linear
------------ReLU
------------Dropout
------------Linear

The network Net itself is an nn.Module subclass. It contains two further nn.Module subclasses, features and classifier, each built as a Sequential container, and features and classifier in turn contain many individual layers, all of which are again nn.Module subclasses; hence the three levels from the outside in.
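You can see this nesting directly by printing the model; the repr that PyTorch generates mirrors the same three levels (abridged here):

print(model)
# Net(
#   (features): Sequential(
#     (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
#     ...
#   )
#   (classifier): Sequential(
#     (0): Linear(in_features=576, out_features=128, bias=True)
#     ...
#   )
# )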
Let's now look at what each of these instance methods returns.

In [7]: model.named_modules()
Out[7]: <generator object Module.named_modules at 0x7f5db88f3840>
In [8]: model.modules()
Out[8]: <generator object Module.modules at 0x7f5db3f53c00>
In [9]: model.children()
Out[9]: <generator object Module.children at 0x7f5db3f53408>
In [10]: model.named_children()
Out[10]: <generator object Module.named_children at 0x7f5db80305e8>
In [11]: model.parameters()
Out[11]: <generator object Module.parameters at 0x7f5db3f534f8>
In [12]: model.named_parameters()
Out[12]: <generator object Module.named_parameters at 0x7f5d42da7570>
In [13]: model.state_dict()
Out[13]:
OrderedDict([('features.0.weight', tensor([[[[ 0.1200, -0.1627, -0.0841],
          [-0.1369, -0.1525,  0.0541],
          [ 0.1203,  0.0564,  0.0908]],
          ……

As you can see, model.state_dict() returns an OrderedDict, while the other methods all return generators, i.e. iterable objects. Let's use list comprehensions to collect the generator outputs into lists so we can inspect them further:

In [14]: model_modules = [x for x in model.modules()]
In [15]: model_named_modules = [x for x in model.named_modules()]
In [16]: model_children = [x for x in model.children()]
In [17]: model_named_children = [x for x in model.named_children()]
In [18]: model_parameters = [x for x in model.parameters()]
In [19]: model_named_parameters = [x for x in model.named_parameters()]

1. model.modules()

model.modules() iterates over all of the model's submodules, where "submodule" means any instance of an nn.Module subclass. In this example, Net itself, features, classifier, and the convolution, pooling, ReLU, Linear, BatchNorm, and Dropout layers built from nn.xxx are all nn.Module subclasses, so model.modules() visits every one of them. Let's look at the list model_modules:

In [20]: model_modules
Out[20]:
[Net(
   (features): Sequential(
     (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
     (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     (2): ReLU(inplace)
     (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
     (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
     (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     (6): ReLU(inplace)
     (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
   )
   (classifier): Sequential(
     (0): Linear(in_features=576, out_features=128, bias=True)
     (1): ReLU(inplace)
     (2): Dropout(p=0.5)
     (3): Linear(in_features=128, out_features=10, bias=True)
   )
 ),
 Sequential(
   (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
   (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
   (2): ReLU(inplace)
   (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
   (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
   (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
   (6): ReLU(inplace)
   (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
 ),
 Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1)),
 BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
 ReLU(inplace),
 MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),
 Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1)),
 BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
 ReLU(inplace),
 MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),
 Sequential(
   (0): Linear(in_features=576, out_features=128, bias=True)
   (1): ReLU(inplace)
   (2): Dropout(p=0.5)
   (3): Linear(in_features=128, out_features=10, bias=True)
 ),
 Linear(in_features=576, out_features=128, bias=True),
 ReLU(inplace),
 Dropout(p=0.5),
 Linear(in_features=128, out_features=10, bias=True)]
In [21]: len(model_modules)
Out[21]: 15

As you can see, the list model_modules contains 15 elements: first the entire Net, then the features submodule followed by every layer inside features, and then the classifier submodule followed by every layer inside classifier. In other words, model.modules() recursively visits all submodules of the model at every level of nesting.
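A typical practical use of model.modules() is that it reaches every layer no matter how deeply it is nested. For example, here is a minimal sketch that puts every BatchNorm layer into eval mode (freezing BN statistics during fine-tuning is only an illustrative assumption, not something the example above requires):

for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.eval()  # stop updating running_mean / running_var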

2. model.named_modules()

As the name suggests, this is model.modules() with names. model.named_modules() returns not only every submodule of the model but also the name of each one:

In [28]: len(model_named_modules)
Out[28]: 15
In [29]: model_named_modules
Out[29]:
[('', Net(
    (features): Sequential(
      (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
      (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace)
      (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
      (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (6): ReLU(inplace)
      (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (classifier): Sequential(
      (0): Linear(in_features=576, out_features=128, bias=True)
      (1): ReLU(inplace)
      (2): Dropout(p=0.5)
      (3): Linear(in_features=128, out_features=10, bias=True)
    )
  )),
 ('features', Sequential(
    (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
    (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace)
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
    (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (6): ReLU(inplace)
    (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )),
 ('features.0', Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))),
 ('features.1', BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)),
 ('features.2', ReLU(inplace)),
 ('features.3', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)),
 ('features.4', Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))),
 ('features.5', BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)),
 ('features.6', ReLU(inplace)),
 ('features.7', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)),
 ('classifier', Sequential(
    (0): Linear(in_features=576, out_features=128, bias=True)
    (1): ReLU(inplace)
    (2): Dropout(p=0.5)
    (3): Linear(in_features=128, out_features=10, bias=True)
  )),
 ('classifier.0', Linear(in_features=576, out_features=128, bias=True)),
 ('classifier.1', ReLU(inplace)),
 ('classifier.2', Dropout(p=0.5)),
 ('classifier.3', Linear(in_features=128, out_features=10, bias=True))]

As you can see, model.named_modules() also visits 15 elements, but now each one comes with a name. Apart from features and classifier, which were named explicitly when the model was defined, the names of the remaining layers are generated automatically by PyTorch according to fixed rules. Returning the layers together with their names is useful because it lets you modify specific layers by name while iterating. For example, if you gave every convolutional layer a name like conv1, conv2, ... when defining the model, you could handle them like this:

 

for name, layer in model.named_modules():
    if 'conv' in name:
        ...  # process the layer here

Of course, even without names, the same thing can be done with isinstance():

for layer in model.modules():
    if isinstance(layer, nn.Conv2d):
        ...  # process the layer here
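As a concrete illustration of what "process the layer here" might look like, here is a sketch that re-initializes every convolutional layer with Kaiming initialization (the choice of initializer is only an example):

for layer in model.modules():
    if isinstance(layer, nn.Conv2d):
        nn.init.kaiming_normal_(layer.weight, nonlinearity='relu')
        if layer.bias is not None:
            nn.init.zeros_(layer.bias)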

3. model.children()

If we split the network Net into levels from the outside in, features and classifier are the children of Net, the Conv2d, ReLU, BatchNorm2d, and MaxPool2d layers are the children of features, and the Linear, Dropout, and ReLU layers are the children of classifier. model.modules() above traverses not only the model's children but also the children's children, down through every level.
model.children(), in contrast, traverses only the model's immediate children, which here are features and classifier.

In [22]: len(model_children)
Out[22]: 2
In [22]: model_children
Out[22]:
[Sequential(
   (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
   (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
   (2): ReLU(inplace)
   (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
   (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
   (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
   (6): ReLU(inplace)
   (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
 ),
 Sequential(
   (0): Linear(in_features=576, out_features=128, bias=True)
   (1): ReLU(inplace)
   (2): Dropout(p=0.5)
   (3): Linear(in_features=128, out_features=10, bias=True)
 )]

As you can see, it visits only two elements: features and classifier.
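Because model.children() stops at the top-level children, it is convenient for reassembling parts of a model. For instance, here is a sketch that drops the classifier and keeps only the convolutional feature extractor (the 38x38 input size is chosen so that the flattened output matches the 9*8*8 features the classifier expects):

feature_extractor = nn.Sequential(*list(model.children())[:-1])
x = torch.randn(1, 3, 38, 38)
feats = feature_extractor(x)   # shape: (1, 9, 8, 8)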

4. model.named_children()

model.named_children() is model.children() with names attached: compared with model.children(), it not only iterates over the model's immediate children but also returns each child's name:

In [23]: len(model_named_children)
Out[23]: 2
In [24]: model_named_children
Out[24]:
[('features', Sequential(
    (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
    (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace)
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
    (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (6): ReLU(inplace)
    (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )),
 ('classifier', Sequential(
    (0): Linear(in_features=576, out_features=128, bias=True)
    (1): ReLU(inplace)
    (2): Dropout(p=0.5)
    (3): Linear(in_features=128, out_features=10, bias=True)
  ))]

Compared with model.children() above, model.named_children() additionally returns the names of the two children: features and classifier.
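The names make it easy to treat the two children differently, for example freezing the features backbone while keeping the classifier trainable (a minimal sketch of one possible fine-tuning setup):

for name, child in model.named_children():
    if name == 'features':
        for param in child.parameters():
            param.requires_grad = False  # freeze the convolutional backbone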

5. model.parameters()

model.parameters() iterates over all parameters of the model:

In [30]: len(model_parameters)
Out[30]: 12
In [31]: model_parameters
Out[31]:
[Parameter containing:
 tensor([[[[ 0.1200, -0.1627, -0.0841],
           [-0.1369, -0.1525,  0.0541],
           [ 0.1203,  0.0564,  0.0908]],
          ……
          [[-0.1587,  0.0735, -0.0066],
           [ 0.0210,  0.0257, -0.0838],
           [-0.1797,  0.0675,  0.1282]]]], requires_grad=True),
 Parameter containing:
 tensor([-0.1251,  0.1673,  0.1241, -0.1876,  0.0683,  0.0346],
        requires_grad=True),
 Parameter containing:
 tensor([0.0072, 0.0272, 0.8620, 0.0633, 0.9411, 0.2971], requires_grad=True),
 Parameter containing:
 tensor([0., 0., 0., 0., 0., 0.], requires_grad=True),
 Parameter containing:
 tensor([[[[ 0.0632, -0.1078, -0.0800],
           [-0.0488,  0.0167,  0.0473],
           [-0.0743,  0.0469, -0.1214]],
          ……
          [[-0.1067, -0.0851,  0.0498],
           [-0.0695,  0.0380, -0.0289],
           [-0.0700,  0.0969, -0.0557]]]], requires_grad=True),
 Parameter containing:
 tensor([-0.0608,  0.0154,  0.0231,  0.0886, -0.0577,  0.0658, -0.1135, -0.0221,
          0.0991], requires_grad=True),
 Parameter containing:
 tensor([0.2514, 0.1924, 0.9139, 0.8075, 0.6851, 0.4522, 0.5963, 0.8135, 0.4010],
        requires_grad=True),
 Parameter containing:
 tensor([0., 0., 0., 0., 0., 0., 0., 0., 0.], requires_grad=True),
 Parameter containing:
 tensor([[ 0.0223,  0.0079, -0.0332, ..., -0.0394,  0.0291,  0.0068],
         [ 0.0037, -0.0079,  0.0011, ..., -0.0277, -0.0273,  0.0009],
         [ 0.0150, -0.0110,  0.0319, ..., -0.0110, -0.0072, -0.0333],
         ...,
         [-0.0274, -0.0296, -0.0156, ...,  0.0359, -0.0303, -0.0114],
         [ 0.0222,  0.0243, -0.0115, ...,  0.0369, -0.0347,  0.0291],
         [ 0.0045,  0.0156,  0.0281, ..., -0.0348, -0.0370, -0.0152]],
        requires_grad=True),
 Parameter containing:
 tensor([ 0.0072, -0.0399, -0.0138,  0.0062, -0.0099, -0.0006, -0.0142, -0.0337,
          ……
         -0.0370, -0.0121, -0.0348, -0.0200, -0.0285,  0.0367,  0.0050, -0.0166],
        requires_grad=True),
 Parameter containing:
 tensor([[-0.0130,  0.0301,  0.0721, ..., -0.0634,  0.0325, -0.0830],
         [-0.0086, -0.0374, -0.0281, ..., -0.0543,  0.0105,  0.0822],
         [-0.0305,  0.0047, -0.0090, ...,  0.0370, -0.0187,  0.0824],
         ...,
         [ 0.0529, -0.0236,  0.0219, ...,  0.0250,  0.0620, -0.0446],
         [ 0.0077, -0.0576,  0.0600, ..., -0.0412, -0.0290,  0.0103],
         [ 0.0375, -0.0147,  0.0622, ...,  0.0350,  0.0179,  0.0667]],
        requires_grad=True),
 Parameter containing:
 tensor([-0.0709, -0.0675, -0.0492,  0.0694,  0.0390, -0.0861, -0.0427, -0.0638,
         -0.0123,  0.0845], requires_grad=True)]
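The most common use of model.parameters() is simply to hand all parameters to an optimizer; a minimal sketch (the optimizer type and learning rate are arbitrary choices):

import torch.optim as optim

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)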

6. model.named_parameters()

As you may have guessed if you have read this far, this method iterates over the parameters together with their names; each name ends in .weight or .bias, so weights and biases are easy to tell apart:

In [32]: len(model_named_parameters)
Out[32]: 12
In [33]: model_named_parameters
Out[33]:
[('features.0.weight', Parameter containing:
  tensor([[[[ 0.1200, -0.1627, -0.0841],
            [-0.1369, -0.1525,  0.0541],
            [ 0.1203,  0.0564,  0.0908]],
           ……
           [[-0.1587,  0.0735, -0.0066],
            [ 0.0210,  0.0257, -0.0838],
            [-0.1797,  0.0675,  0.1282]]]], requires_grad=True)),
 ('features.0.bias', Parameter containing:
  tensor([-0.1251,  0.1673,  0.1241, -0.1876,  0.0683,  0.0346],
         requires_grad=True)),
 ('features.1.weight', Parameter containing:
  tensor([0.0072, 0.0272, 0.8620, 0.0633, 0.9411, 0.2971], requires_grad=True)),
 ('features.1.bias', Parameter containing:
  tensor([0., 0., 0., 0., 0., 0.], requires_grad=True)),
 ('features.4.weight', Parameter containing:
  tensor([[[[ 0.0632, -0.1078, -0.0800],
            [-0.0488,  0.0167,  0.0473],
            [-0.0743,  0.0469, -0.1214]],
           ……
           [[-0.1067, -0.0851,  0.0498],
            [-0.0695,  0.0380, -0.0289],
            [-0.0700,  0.0969, -0.0557]]]], requires_grad=True)),
 ('features.4.bias', Parameter containing:
  tensor([-0.0608,  0.0154,  0.0231,  0.0886, -0.0577,  0.0658, -0.1135, -0.0221,
           0.0991], requires_grad=True)),
 ('features.5.weight', Parameter containing:
  tensor([0.2514, 0.1924, 0.9139, 0.8075, 0.6851, 0.4522, 0.5963, 0.8135, 0.4010],
         requires_grad=True)),
 ('features.5.bias', Parameter containing:
  tensor([0., 0., 0., 0., 0., 0., 0., 0., 0.], requires_grad=True)),
 ('classifier.0.weight', Parameter containing:
  tensor([[ 0.0223,  0.0079, -0.0332, ..., -0.0394,  0.0291,  0.0068],
          ……
          [ 0.0045,  0.0156,  0.0281, ..., -0.0348, -0.0370, -0.0152]],
         requires_grad=True)),
 ('classifier.0.bias', Parameter containing:
  tensor([ 0.0072, -0.0399, -0.0138,  0.0062, -0.0099, -0.0006, -0.0142, -0.0337,
           ……
          -0.0370, -0.0121, -0.0348, -0.0200, -0.0285,  0.0367,  0.0050, -0.0166],
         requires_grad=True)),
 ('classifier.3.weight', Parameter containing:
  tensor([[-0.0130,  0.0301,  0.0721, ..., -0.0634,  0.0325, -0.0830],
          [-0.0086, -0.0374, -0.0281, ..., -0.0543,  0.0105,  0.0822],
          [-0.0305,  0.0047, -0.0090, ...,  0.0370, -0.0187,  0.0824],
          ...,
          [ 0.0529, -0.0236,  0.0219, ...,  0.0250,  0.0620, -0.0446],
          [ 0.0077, -0.0576,  0.0600, ..., -0.0412, -0.0290,  0.0103],
          [ 0.0375, -0.0147,  0.0622, ...,  0.0350,  0.0179,  0.0667]],
         requires_grad=True)),
 ('classifier.3.bias', Parameter containing:
  tensor([-0.0709, -0.0675, -0.0492,  0.0694,  0.0390, -0.0861, -0.0427, -0.0638,
          -0.0123,  0.0845], requires_grad=True))]
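model.named_parameters() comes in handy when different parameters should be treated differently, for example giving the classifier a larger learning rate than the feature extractor via parameter groups (the grouping rule and learning rates below are purely illustrative):

import torch.optim as optim

feature_params = [p for n, p in model.named_parameters() if n.startswith('features')]
classifier_params = [p for n, p in model.named_parameters() if n.startswith('classifier')]
optimizer = optim.SGD([
    {'params': feature_params, 'lr': 0.001},
    {'params': classifier_params, 'lr': 0.01},
], momentum=0.9)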

7. model.state_dict()

model.state_dict() returns the model's parameters (and buffers) directly as a dictionary. Unlike the methods above, no iteration is needed: the result is itself an OrderedDict, so you can modify the parameters of individual layers simply by editing the state_dict, which is particularly convenient for parameter pruning. The state_dict method is covered in more detail in my article PyTorch模型保存深入理解.
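The typical use of state_dict() is saving and restoring weights; a minimal sketch (the file name net.pth is arbitrary):

torch.save(model.state_dict(), 'net.pth')        # save only parameters and buffers
model2 = Net()
model2.load_state_dict(torch.load('net.pth'))    # load them into a fresh instance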
