transpose operates on only two dimensions of a tensor at a time and therefore takes exactly two dimension arguments.
permute can rearrange any number of dimensions at once.
a=torch.rand(2,3,4)#torch.Size([2,3,4])
b=a.transpose(0,1)#torch.Size([3,2,4])
c=a.transpose(0,1).transpose(1,2)#torch.Size([3,4,2])
d=a.permute(1,2,0)#torch.Size([3,4,2])
cat concatenates a sequence of Tensors along a given dimension. The total number of dimensions stays the same; only the size of the chosen dimension grows.
torch.cat(seq, dim=0, out=None) → Tensor
Parameters: seq is the sequence of Tensors to concatenate; dim is the dimension along which to concatenate.
Except for the concatenation dimension, all other dimensions must have equal sizes.
a=torch.rand(2,3)
b=torch.rand(4,3)
c=torch.cat((a,b),dim=0)#torch.Size([6,3])
stack first inserts a new dimension and then stacks the tensors along it, so the result has one more dimension than the inputs; all input tensors must have exactly the same shape.
It is commonly used when assembling individual samples into a batch.
a=torch.rand([3,224,224])
b=torch.rand([3,224,224])
c=torch.stack((a,b),0)#torch.Size([2,3,224,224])
d=torch.stack((a,b),3)#torch.Size([3,224,224,2])
squeeze(dim_n) removes dimension dim_n if it has exactly one element; called without an argument, it removes all size-1 dimensions.
torch.squeeze(input, dim=None, out=None) → Tensor
Parameters: input is the Tensor; dim, if given, restricts the squeeze to that single dimension.
unsqueeze(dim_n) inserts a new dimension of size 1 at position dim_n; it is the inverse of squeeze.
a=torch.rand(2,1,4)#torch.Size([2,1,4])
b=a.squeeze()#torch.Size([2,4]), the size-1 second dimension is removed
c=a.squeeze(1)#torch.Size([2,4]), same as above
d=a.squeeze(2)#torch.Size([2,1,4]), the third dimension has more than one element, so it cannot be removed
e=a.unsqueeze(0)#torch.Size([1,2,1,4]), a new first dimension is added
torch.Tensor.expand(*sizes) → Tensor
Expands the tensor to the given size by broadcasting its size-1 dimensions, so the values appear repeated to match that size.
x=torch.Tensor([[1],[2],[3]])#torch.Size([3,1])
x.expand(3,4)#torch.Size([3,4]),[[1,1,1,1],[2,2,2,2],[3,3,3,3]]
torch.Tensor.expand_as(other)
Expands the tensor to the same size as the given Tensor.
a=torch.Tensor([[1],[2],[3]])
b=torch.Tensor([[4]])
c=b.expand_as(a)
#tensor([[4],[4],[4]])
contiguous: view can only be called on a contiguous tensor. If transpose, permute, or similar operations were applied before view, call contiguous() first to obtain a contiguous copy, because view requires the tensor's data to occupy a single contiguous block of memory. Some tensors are not stored in one contiguous block but are composed of separate chunks; since view() relies on contiguous storage, calling contiguous() rearranges such a tensor into a contiguous layout in memory. Use torch.Tensor.is_contiguous() to check whether a tensor is contiguous.
x=torch.ones(10,10)
x.is_contiguous() #True
x.transpose(0,1).is_contiguous() #False
x.transpose(0, 1).contiguous().is_contiguous() #True
Therefore, it is safest to call contiguous() before view: x.contiguous().view(...).
torch.Tensor.view(*shape) → Tensor
Parameters: the target shape.
Returns a new Tensor with the same data but with the shape given by the arguments.
a=torch.rand(4,4)#torch.Size([4,4])
b=a.view(16)#torch.Size([16])
Parameters: a number of other layers
torch.nn.Sequential is simply a Sequential container: it wraps a series of operations in the order given, which makes them easy to reuse. It combines several simple layers into one module for cleaner structure display and reuse.
nn.Sequential(
nn.Conv2d(in_dim,6,3,stride=1,padding=1),
nn.ReLU(True),
nn.MaxPool2d(2,2),
nn.Conv2d(6,16,5,stride=1,padding=0),
nn.ReLU(True),
nn.MaxPool2d(2,2)
)
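A minimal usage sketch of the container above, assuming 3-channel 32×32 inputs (so setting in_dim to 3 here is purely illustrative):
import torch
import torch.nn as nn
features = nn.Sequential(
    nn.Conv2d(3, 6, 3, stride=1, padding=1),
    nn.ReLU(True),
    nn.MaxPool2d(2, 2),
    nn.Conv2d(6, 16, 5, stride=1, padding=0),
    nn.ReLU(True),
    nn.MaxPool2d(2, 2)
)
x = torch.rand(8, 3, 32, 32)  # a batch of 8 images
out = features(x)             # each layer is applied in the order given
print(out.shape)              # torch.Size([8, 16, 6, 6])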
Parameters: input dimension, output dimension
nn.Linear(dim_in,dim_out)
y = xW^T + b
Purpose: fully connected layer
import torch
x = torch.randn(128, 20)  # the input shape is (128, 20)
m = torch.nn.Linear(20, 30)  # in_features=20, out_features=30
output = m(x)
print('m.weight.shape:\n ', m.weight.shape)
print('m.bias.shape:\n', m.bias.shape)
print('output.shape:\n', output.shape)
# the same computation done by hand: y = x * W^T + b, the weight must be transposed
ans = torch.mm(x, m.weight.t()) + m.bias
print('ans.shape:\n', ans.shape)
print(torch.equal(ans, output))
m.weight.shape:
torch.Size([30, 20])
m.bias.shape:
torch.Size([30])
output.shape:
torch.Size([128, 30])
ans.shape:
torch.Size([128, 30])
True
nn.Linear() and nn.Conv1d() with kernel_size=1 are equivalent, but nn.Linear() runs faster.
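A small sketch of that equivalence (all sizes here are illustrative): copy the Conv1d weights into a Linear layer and compare the outputs.
import torch
import torch.nn as nn
x = torch.randn(4, 8, 16)                  # (batch, channels, length)
conv = nn.Conv1d(8, 32, kernel_size=1)     # a kernel of size 1 acts independently at each position
linear = nn.Linear(8, 32)
linear.weight.data = conv.weight.data.squeeze(-1)   # (32, 8, 1) -> (32, 8)
linear.bias.data = conv.bias.data
out_conv = conv(x)                                       # (4, 32, 16)
out_linear = linear(x.transpose(1, 2)).transpose(1, 2)   # Linear works on the last dimension
print(torch.allclose(out_conv, out_linear, atol=1e-6))   # True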
Parameters: in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True
That is: the number of input channels, the number of output channels, the group count groups, the kernel size kernel_size, the stride stride, and the padding padding.
kernel_size, stride, padding, and dilation can each be a single int, which is then used for both the height and the width, or a tuple of two ints giving the height and width values separately.
nn.Conv2d(6, 16, 5, stride=1, padding=0)
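For instance, the layer above applied to a hypothetical 28×28 feature map (the spatial size is illustrative):
import torch
import torch.nn as nn
conv = nn.Conv2d(6, 16, 5, stride=1, padding=0)
x = torch.rand(1, 6, 28, 28)   # (batch, in_channels, H, W)
y = conv(x)
print(y.shape)                 # torch.Size([1, 16, 24, 24]); H_out = (28 - 5 + 2*0)/1 + 1 = 24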
nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True)
nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True)
nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True)
nn.InstanceNorm1d(num_features, eps=1e-05, momentum=0.1, affine=False)
nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=False)
nn.InstanceNorm3d(num_features, eps=1e-05, momentum=0.1, affine=False)
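A brief sketch (shapes are illustrative): num_features must match the channel dimension C of an (N, C, H, W) input, and both layers preserve the input shape.
import torch
import torch.nn as nn
x = torch.rand(8, 16, 32, 32)          # (N, C, H, W)
bn = nn.BatchNorm2d(16)                # normalizes each channel over (N, H, W)
inorm = nn.InstanceNorm2d(16)          # normalizes each channel per sample over (H, W)
print(bn(x).shape, inorm(x).shape)     # both torch.Size([8, 16, 32, 32])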
nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
nn.MaxUnpool1d(kernel_size, stride=None, padding=0)
nn.MaxUnpool2d(kernel_size, stride=None, padding=0)
nn.MaxUnpool3d(kernel_size, stride=None, padding=0)
nn.AvgPool1d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)
nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)
nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)
nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)
nn.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False)
nn.AdaptiveMaxPool1d(output_size, return_indices=False)
nn.AdaptiveMaxPool2d(output_size, return_indices=False)
nn.AdaptiveMaxPool3d(output_size, return_indices=False)
nn.AdaptiveAvgPool1d(output_size)
nn.AdaptiveAvgPool2d(output_size)
nn.AdaptiveAvgPool3d(output_size)
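A quick sketch contrasting fixed-window pooling with adaptive pooling (the sizes are illustrative):
import torch
import torch.nn as nn
x = torch.rand(1, 3, 32, 32)
print(nn.MaxPool2d(2, 2)(x).shape)            # torch.Size([1, 3, 16, 16]): window 2, stride 2 halves H and W
print(nn.AdaptiveAvgPool2d((7, 7))(x).shape)  # torch.Size([1, 3, 7, 7]): the output size is fixed regardless of input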
nn.Dropout(p=0.5, inplace=False)
nn.Dropout2d(p=0.5, inplace=False)
nn.Dropout3d(p=0.5, inplace=False)
nn.AlphaDropout(p=0.5)
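A small sketch showing that Dropout only takes effect in training mode (p=0.5 as above):
import torch
import torch.nn as nn
drop = nn.Dropout(p=0.5)
x = torch.ones(2, 8)
drop.train()
print(drop(x))     # roughly half the entries are zeroed; survivors are scaled by 1/(1-p) = 2.0
drop.eval()
print(drop(x))     # identity at evaluation time: all ones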
nn.ReLU(inplace=False)
nn.ReLU6(inplace=False)
nn.ELU(alpha=1.0, inplace=False)
nn.SELU(inplace=False)
nn.PReLU(num_parameters=1, init=0.25)
nn.LeakyReLU(negative_slope=0.01, inplace=False)
nn.Threshold(threshold, value, inplace=False)
nn.Hardtanh(min_val=-1, max_val=1, inplace=False, min_value=None, max_value=None)
nn.Sigmoid
nn.LogSigmoid
nn.Tanh
nn.Tanhshrink
nn.Softplus(beta=1, threshold=20)
nn.Softmax(dim=None)
nn.LogSoftmax(dim=None)
nn.Softmax2d
nn.Softmin(dim=None)
nn.Softshrink(lambd=0.5)
nn.Softsign
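A few of these applied element-wise, as a quick illustration (the input values are just examples):
import torch
import torch.nn as nn
x = torch.tensor([-2.0, 0.0, 2.0])
print(nn.ReLU()(x))          # tensor([0., 0., 2.])
print(nn.Sigmoid()(x))       # tensor([0.1192, 0.5000, 0.8808])
print(nn.Softmax(dim=0)(x))  # a probability vector that sums to 1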
nn.L1Loss(size_average=True, reduce=True)
nn.MSELoss(size_average=True, reduce=True)
nn.CrossEntropyLoss(weight=None, size_average=True, ignore_index=-100, reduce=True)
nn.NLLLoss(weight=None, size_average=True, ignore_index=-100, reduce=True)
nn.PoissonNLLLoss(log_input=True, full=False, size_average=True, eps=1e-08)
nn.NLLLoss2d(weight=None, size_average=True, ignore_index=-100, reduce=True)
nn.KLDivLoss(size_average=True, reduce=True)
nn.BCELoss(weight=None, size_average=True)
nn.BCEWithLogitsLoss(weight=None, size_average=True)
nn.MarginRankingLoss(margin=0, size_average=True)
nn.HingeEmbeddingLoss(margin=1.0, size_average=True)
nn.MultiLabelMarginLoss(size_average=True)
nn.SmoothL1Loss(size_average=True, reduce=True)
nn.SoftMarginLoss(size_average=True)
nn.CosineEmbeddingLoss(margin=0, size_average=True)
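As one example, a minimal sketch of nn.CrossEntropyLoss (the batch size and class count are illustrative): it takes raw logits and integer class targets, applying LogSoftmax and NLLLoss internally.
import torch
import torch.nn as nn
logits = torch.randn(4, 10)            # (batch, num_classes), no softmax beforehand
targets = torch.tensor([1, 0, 3, 9])   # class indices
loss = nn.CrossEntropyLoss()(logits, targets)
print(loss)                            # a scalar tensor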