This article uses a small experiment to observe and explain the return values of the PyTorch model instance methods model.modules(), model.named_modules(), model.children(), model.named_children(), model.parameters(), model.named_parameters(), and model.state_dict(). The example is as follows:
- import torch
- import torch.nn as nn
-
- class Net(nn.Module):
-
-     def __init__(self, num_class=10):
-         super().__init__()
-         self.features = nn.Sequential(
-             nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3),
-             nn.BatchNorm2d(6),
-             nn.ReLU(inplace=True),
-             nn.MaxPool2d(kernel_size=2, stride=2),
-             nn.Conv2d(in_channels=6, out_channels=9, kernel_size=3),
-             nn.BatchNorm2d(9),
-             nn.ReLU(inplace=True),
-             nn.MaxPool2d(kernel_size=2, stride=2)
-         )
-
-         self.classifier = nn.Sequential(
-             nn.Linear(9*8*8, 128),
-             nn.ReLU(inplace=True),
-             nn.Dropout(),
-             nn.Linear(128, num_class)
-         )
-
-     def forward(self, x):
-         output = self.features(x)
-         output = output.view(output.size()[0], -1)
-         output = self.classifier(output)
-
-         return output
-
- model = Net()
The code above defines a network model made up of two convolutional layers and two fully connected layers. Note that this Net has three levels of nesting, from the outside in:
Net:
----features:
------------Conv2d
------------BatchNorm2d
------------ReLU
------------MaxPool2d
------------Conv2d
------------BatchNorm2d
------------ReLU
------------MaxPool2d
----classifier:
------------Linear
------------ReLU
------------Dropout
------------Linear
Net itself is a subclass of nn.Module. It in turn contains two nn.Module subclasses built from Sequential containers, features and classifier, and each of those contains a number of network layers that are also nn.Module subclasses. So, from the outside in, there are three levels in total.
Now let us look at what each of these instance methods returns.
- In [7]: model.named_modules()
- Out[7]: <generator object Module.named_modules at 0x7f5db88f3840>
-
- In [8]: model.modules()
- Out[8]: <generator object Module.modules at 0x7f5db3f53c00>
-
- In [9]: model.children()
- Out[9]: <generator object Module.children at 0x7f5db3f53408>
-
- In [10]: model.named_children()
- Out[10]: <generator object Module.named_children at 0x7f5db80305e8>
-
- In [11]: model.parameters()
- Out[11]: <generator object Module.parameters at 0x7f5db3f534f8>
-
- In [12]: model.named_parameters()
- Out[12]: <generator object Module.named_parameters at 0x7f5d42da7570>
-
- In [13]: model.state_dict()
- Out[13]:
- OrderedDict([('features.0.weight', tensor([[[[ 0.1200, -0.1627, -0.0841],
- [-0.1369, -0.1525, 0.0541],
- [ 0.1203, 0.0564, 0.0908]],
- ……
-
As we can see, apart from model.state_dict(), which returns a dictionary, the other methods all return generators, i.e. iterable objects. We materialize them with list comprehensions (a for loop over each generator) so that we can inspect the return values further:
- In [14]: model_modules = [x for x in model.modules()]
-
- In [15]: model_named_modules = [x for x in model.named_modules()]
-
- In [16]: model_children = [x for x in model.children()]
-
- In [17]: model_named_children = [x for x in model.named_children()]
-
- In [18]: model_parameters = [x for x in model.parameters()]
-
- In [19]: model_named_parameters = [x for x in model.named_parameters()]
1. model.modules()
model.modules() iterates over all of the model's submodules at every level of nesting; every such submodule is an nn.Module subclass. In this article's example, Net itself, features, classifier, and the convolution, pooling, ReLU, Linear, BatchNorm and Dropout layers built from nn.xxx are all nn.Module subclasses, so model.modules() visits every one of them. Let's look at the list model_modules:
- In [20]: model_modules
- Out[20]:
- [Net(
- (features): Sequential(
- (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
- (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (2): ReLU(inplace)
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
- (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (6): ReLU(inplace)
- (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- )
- (classifier): Sequential(
- (0): Linear(in_features=576, out_features=128, bias=True)
- (1): ReLU(inplace)
- (2): Dropout(p=0.5)
- (3): Linear(in_features=128, out_features=10, bias=True)
- )
- ),
- Sequential(
- (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
- (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (2): ReLU(inplace)
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
- (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (6): ReLU(inplace)
- (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- ),
- Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1)),
- BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
- ReLU(inplace),
- MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),
- Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1)),
- BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
- ReLU(inplace),
- MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),
- Sequential(
- (0): Linear(in_features=576, out_features=128, bias=True)
- (1): ReLU(inplace)
- (2): Dropout(p=0.5)
- (3): Linear(in_features=128, out_features=10, bias=True)
- ),
- Linear(in_features=576, out_features=128, bias=True),
- ReLU(inplace),
- Dropout(p=0.5),
- Linear(in_features=128, out_features=10, bias=True)]
-
- In [21]: len(model_modules)
- Out[21]: 15
-
As we can see, model_modules contains 15 elements: first the whole Net, then the features submodule, then every layer inside features, then the classifier submodule and every layer inside it. In other words, model.modules() recursively traverses all of the model's submodules.
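A common practical use of this recursive traversal is applying custom weight initialization to every layer of a given type. Below is a minimal sketch; the specific init schemes (kaiming_normal_, ones_, normal_, zeros_) are illustrative choices, not something prescribed by the example above:
- # Sketch: walk every submodule via model.modules() and initialise it by layer type.
- # The initialisation schemes below are illustrative assumptions.
- for m in model.modules():
-     if isinstance(m, nn.Conv2d):
-         nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
-         if m.bias is not None:
-             nn.init.zeros_(m.bias)
-     elif isinstance(m, nn.BatchNorm2d):
-         nn.init.ones_(m.weight)
-         nn.init.zeros_(m.bias)
-     elif isinstance(m, nn.Linear):
-         nn.init.normal_(m.weight, 0, 0.01)
-         nn.init.zeros_(m.bias)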
2. model.named_modules()
As the name suggests, this is model.modules() with names. model.named_modules() returns not only all of the model's submodules, but also the name of each one:
- In [28]: len(model_named_modules)
- Out[28]: 15
-
- In [29]: model_named_modules
- Out[29]:
- [('', Net(
- (features): Sequential(
- (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
- (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (2): ReLU(inplace)
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
- (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (6): ReLU(inplace)
- (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- )
- (classifier): Sequential(
- (0): Linear(in_features=576, out_features=128, bias=True)
- (1): ReLU(inplace)
- (2): Dropout(p=0.5)
- (3): Linear(in_features=128, out_features=10, bias=True)
- )
- )),
- ('features', Sequential(
- (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
- (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (2): ReLU(inplace)
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
- (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (6): ReLU(inplace)
- (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- )),
- ('features.0', Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))),
- ('features.1', BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)),
- ('features.2', ReLU(inplace)),
- ('features.3', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)),
- ('features.4', Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))),
- ('features.5', BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)),
- ('features.6', ReLU(inplace)),
- ('features.7', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)),
- ('classifier',
- Sequential(
- (0): Linear(in_features=576, out_features=128, bias=True)
- (1): ReLU(inplace)
- (2): Dropout(p=0.5)
- (3): Linear(in_features=128, out_features=10, bias=True)
- )),
- ('classifier.0', Linear(in_features=576, out_features=128, bias=True)),
- ('classifier.1', ReLU(inplace)),
- ('classifier.2', Dropout(p=0.5)),
- ('classifier.3', Linear(in_features=128, out_features=10, bias=True))]
-
As we can see, model.named_modules() also yields 15 elements, but each one now comes with a name. From the names we can see that, apart from features and classifier, which were named explicitly in the model definition, all other names were generated automatically by PyTorch according to fixed rules. Getting both the layer and its name is useful because it lets us modify specific layers by name while iterating. If each layer had been named explicitly in the model definition, say conv1, conv2, ... for the convolutional layers, we could do the following:
- for name, layer in model.named_modules():
-     if 'conv' in name:
-         pass  # process the layer here
Of course, when no names are returned, isinstance() can be used to achieve the same thing:
- for layer in model.modules():
-     if isinstance(layer, nn.Conv2d):
-         pass  # process the layer here
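As a concrete, purely illustrative instance of "process the layer", the sketch below lowers the dropout probability of every Dropout layer it finds; nn.Dropout stores its probability in the attribute p, and the value 0.3 is just an example:
- for name, layer in model.named_modules():
-     if isinstance(layer, nn.Dropout):
-         layer.p = 0.3  # e.g. changes classifier.2 from p=0.5 to p=0.3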
3. model.children()
If we divide the network Net into levels from the outside in, features and classifier are the direct children of Net, while Conv2d, ReLU, BatchNorm2d and MaxPool2d are children of features, and Linear, Dropout, ReLU are children of classifier. model.modules() above visits not only the model's children, but also the children of those children, i.e. all submodules.
model.children(), in contrast, only visits the model's direct children, which here are features and classifier.
- In [22]: len(model_children)
- Out[22]: 2
-
- In [22]: model_children
- Out[22]:
- [Sequential(
- (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
- (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (2): ReLU(inplace)
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
- (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (6): ReLU(inplace)
- (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- ),
- Sequential(
- (0): Linear(in_features=576, out_features=128, bias=True)
- (1): ReLU(inplace)
- (2): Dropout(p=0.5)
- (3): Linear(in_features=128, out_features=10, bias=True)
- )]
-
As we can see, it visits only two elements: features and classifier.
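Because model.children() yields only the top-level submodules, it is convenient for reusing part of a network. Here is a minimal sketch, assuming we want only the convolutional part of Net as a feature extractor; the 38x38 input size is an illustrative choice that makes the output match the 9*8*8 features expected by classifier:
- # Sketch: keep everything except the last child (`classifier`) as a feature extractor.
- feature_extractor = nn.Sequential(*list(model.children())[:-1])
- x = torch.randn(1, 3, 38, 38)
- feats = feature_extractor(x)  # shape: (1, 9, 8, 8)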
4. model.named_children()
model.named_children() is model.children() with names: compared with model.children(), it iterates over the model's direct children and also returns their names:
- In [23]: len(model_named_children)
- Out[23]: 2
-
- In [24]: model_named_children
- Out[24]:
- [('features', Sequential(
- (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
- (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (2): ReLU(inplace)
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
- (5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
- (6): ReLU(inplace)
- (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- )),
- ('classifier', Sequential(
- (0): Linear(in_features=576, out_features=128, bias=True)
- (1): ReLU(inplace)
- (2): Dropout(p=0.5)
- (3): Linear(in_features=128, out_features=10, bias=True)
- ))]
-
Compared with model.children() above, model.named_children() also returns the names of the two children: features and classifier.
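With the child names available, a specific submodule can easily be treated differently, for example freezing the feature extractor while leaving the classifier trainable. A minimal sketch:
- # Sketch: freeze the `features` child by name; `classifier` stays trainable.
- for name, child in model.named_children():
-     if name == 'features':
-         for param in child.parameters():
-             param.requires_grad = False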
5. model.parameters()
This iterates over all of the model's parameters.
- In [30]: len(model_parameters)
- Out[30]: 12
-
- In [31]: model_parameters
- Out[31]:
- [Parameter containing:
- tensor([[[[ 0.1200, -0.1627, -0.0841],
- [-0.1369, -0.1525, 0.0541],
- [ 0.1203, 0.0564, 0.0908]],
- ……
- [[-0.1587, 0.0735, -0.0066],
- [ 0.0210, 0.0257, -0.0838],
- [-0.1797, 0.0675, 0.1282]]]], requires_grad=True),
- Parameter containing:
- tensor([-0.1251, 0.1673, 0.1241, -0.1876, 0.0683, 0.0346],
- requires_grad=True),
- Parameter containing:
- tensor([0.0072, 0.0272, 0.8620, 0.0633, 0.9411, 0.2971], requires_grad=True),
- Parameter containing:
- tensor([0., 0., 0., 0., 0., 0.], requires_grad=True),
- Parameter containing:
- tensor([[[[ 0.0632, -0.1078, -0.0800],
- [-0.0488, 0.0167, 0.0473],
- [-0.0743, 0.0469, -0.1214]],
- ……
- [[-0.1067, -0.0851, 0.0498],
- [-0.0695, 0.0380, -0.0289],
- [-0.0700, 0.0969, -0.0557]]]], requires_grad=True),
- Parameter containing:
- tensor([-0.0608, 0.0154, 0.0231, 0.0886, -0.0577, 0.0658, -0.1135, -0.0221,
- 0.0991], requires_grad=True),
- Parameter containing:
- tensor([0.2514, 0.1924, 0.9139, 0.8075, 0.6851, 0.4522, 0.5963, 0.8135, 0.4010],
- requires_grad=True),
- Parameter containing:
- tensor([0., 0., 0., 0., 0., 0., 0., 0., 0.], requires_grad=True),
- Parameter containing:
- tensor([[ 0.0223, 0.0079, -0.0332, ..., -0.0394, 0.0291, 0.0068],
- [ 0.0037, -0.0079, 0.0011, ..., -0.0277, -0.0273, 0.0009],
- [ 0.0150, -0.0110, 0.0319, ..., -0.0110, -0.0072, -0.0333],
- ...,
- [-0.0274, -0.0296, -0.0156, ..., 0.0359, -0.0303, -0.0114],
- [ 0.0222, 0.0243, -0.0115, ..., 0.0369, -0.0347, 0.0291],
- [ 0.0045, 0.0156, 0.0281, ..., -0.0348, -0.0370, -0.0152]],
- requires_grad=True),
- Parameter containing:
- tensor([ 0.0072, -0.0399, -0.0138, 0.0062, -0.0099, -0.0006, -0.0142, -0.0337,
- ……
- -0.0370, -0.0121, -0.0348, -0.0200, -0.0285, 0.0367, 0.0050, -0.0166],
- requires_grad=True),
- Parameter containing:
- tensor([[-0.0130, 0.0301, 0.0721, ..., -0.0634, 0.0325, -0.0830],
- [-0.0086, -0.0374, -0.0281, ..., -0.0543, 0.0105, 0.0822],
- [-0.0305, 0.0047, -0.0090, ..., 0.0370, -0.0187, 0.0824],
- ...,
- [ 0.0529, -0.0236, 0.0219, ..., 0.0250, 0.0620, -0.0446],
- [ 0.0077, -0.0576, 0.0600, ..., -0.0412, -0.0290, 0.0103],
- [ 0.0375, -0.0147, 0.0622, ..., 0.0350, 0.0179, 0.0667]],
- requires_grad=True),
- Parameter containing:
- tensor([-0.0709, -0.0675, -0.0492, 0.0694, 0.0390, -0.0861, -0.0427, -0.0638,
- -0.0123, 0.0845], requires_grad=True)]
-
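In practice, model.parameters() is most often used to hand all trainable parameters to an optimizer, or to count them. A minimal sketch; the optimizer type and learning rate are illustrative choices:
- # Sketch: the two most common uses of model.parameters().
- optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
- num_params = sum(p.numel() for p in model.parameters())
- print(num_params)  # total number of trainable scalars in Net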
6. model.named_parameters()
If you have followed along from the previous sections, you can guess what this does: it iterates over the parameters together with their names. Each name ends in .weight or .bias, which distinguishes weights from biases:
- In [32]: len(model_named_parameters)
- Out[32]: 12
-
- In [33]: model_named_parameters
- Out[33]:
- [('features.0.weight', Parameter containing:
- tensor([[[[ 0.1200, -0.1627, -0.0841],
- [-0.1369, -0.1525, 0.0541],
- [ 0.1203, 0.0564, 0.0908]],
- ……
- [[-0.1587, 0.0735, -0.0066],
- [ 0.0210, 0.0257, -0.0838],
- [-0.1797, 0.0675, 0.1282]]]], requires_grad=True)),
- ('features.0.bias', Parameter containing:
- tensor([-0.1251, 0.1673, 0.1241, -0.1876, 0.0683, 0.0346],
- requires_grad=True)),
- ('features.1.weight', Parameter containing:
- tensor([0.0072, 0.0272, 0.8620, 0.0633, 0.9411, 0.2971], requires_grad=True)),
- ('features.1.bias', Parameter containing:
- tensor([0., 0., 0., 0., 0., 0.], requires_grad=True)),
- ('features.4.weight', Parameter containing:
- tensor([[[[ 0.0632, -0.1078, -0.0800],
- [-0.0488, 0.0167, 0.0473],
- [-0.0743, 0.0469, -0.1214]],
- ……
- [[-0.1067, -0.0851, 0.0498],
- [-0.0695, 0.0380, -0.0289],
- [-0.0700, 0.0969, -0.0557]]]], requires_grad=True)),
- ('features.4.bias', Parameter containing:
- tensor([-0.0608, 0.0154, 0.0231, 0.0886, -0.0577, 0.0658, -0.1135, -0.0221,
- 0.0991], requires_grad=True)),
- ('features.5.weight', Parameter containing:
- tensor([0.2514, 0.1924, 0.9139, 0.8075, 0.6851, 0.4522, 0.5963, 0.8135, 0.4010],
- requires_grad=True)),
- ('features.5.bias', Parameter containing:
- tensor([0., 0., 0., 0., 0., 0., 0., 0., 0.], requires_grad=True)),
- ('classifier.0.weight', Parameter containing:
- tensor([[ 0.0223, 0.0079, -0.0332, ..., -0.0394, 0.0291, 0.0068],
- ……
- [ 0.0045, 0.0156, 0.0281, ..., -0.0348, -0.0370, -0.0152]],
- requires_grad=True)),
- ('classifier.0.bias', Parameter containing:
- tensor([ 0.0072, -0.0399, -0.0138, 0.0062, -0.0099, -0.0006, -0.0142, -0.0337,
- ……
- -0.0370, -0.0121, -0.0348, -0.0200, -0.0285, 0.0367, 0.0050, -0.0166],
- requires_grad=True)),
- ('classifier.3.weight', Parameter containing:
- tensor([[-0.0130, 0.0301, 0.0721, ..., -0.0634, 0.0325, -0.0830],
- [-0.0086, -0.0374, -0.0281, ..., -0.0543, 0.0105, 0.0822],
- [-0.0305, 0.0047, -0.0090, ..., 0.0370, -0.0187, 0.0824],
- ...,
- [ 0.0529, -0.0236, 0.0219, ..., 0.0250, 0.0620, -0.0446],
- [ 0.0077, -0.0576, 0.0600, ..., -0.0412, -0.0290, 0.0103],
- [ 0.0375, -0.0147, 0.0622, ..., 0.0350, 0.0179, 0.0667]],
- requires_grad=True)),
- ('classifier.3.bias', Parameter containing:
- tensor([-0.0709, -0.0675, -0.0492, 0.0694, 0.0390, -0.0861, -0.0427, -0.0638,
- -0.0123, 0.0845], requires_grad=True))]
-
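Because each name shows which submodule the parameter belongs to, model.named_parameters() is handy for building per-group optimizer settings, for example a smaller learning rate for the convolutional layers than for the classifier. A minimal sketch with illustrative learning rates:
- # Sketch: split parameters into two optimizer groups by name prefix.
- feature_params = [p for n, p in model.named_parameters() if n.startswith('features')]
- classifier_params = [p for n, p in model.named_parameters() if n.startswith('classifier')]
- optimizer = torch.optim.SGD([
-     {'params': feature_params, 'lr': 0.001},
-     {'params': classifier_params, 'lr': 0.01},
- ], momentum=0.9)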
7. model.state_dict()
model.state_dict() directly returns a dictionary for the model. Unlike the previous methods, no iteration is needed: the return value is itself a dictionary, so the parameters of each layer can be changed by editing the state_dict directly, which is particularly convenient for parameter pruning. The state_dict method is covered in more detail in my article PyTorch模型保存深入理解.
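As a minimal sketch of that idea, the snippet below zeroes out one layer's weights by editing the state_dict and loading it back, then shows the usual save/restore pattern; the key 'classifier.3.weight' is taken from the printed output above, and the file name is an arbitrary example:
- sd = model.state_dict()
- sd['classifier.3.weight'] = torch.zeros_like(sd['classifier.3.weight'])
- model.load_state_dict(sd)  # the last Linear layer's weights are now all zero
- torch.save(model.state_dict(), 'net.pth')         # save parameters to disk
- model.load_state_dict(torch.load('net.pth'))      # restore them later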