Basic tensor functions!!
Indexing a Tensor with -1
a = torch.tensor([[0.1, 1.2, 2, 3, 4], [2.2, 3.1, 5, 6, 7], [4.9, 5.2, 8, 9, 10]])
a
tensor([[ 0.1000,  1.2000,  2.0000,  3.0000,  4.0000],
        [ 2.2000,  3.1000,  5.0000,  6.0000,  7.0000],
        [ 4.9000,  5.2000,  8.0000,  9.0000, 10.0000]])
a[:, 0]
tensor([0.1000, 2.2000, 4.9000])
a[:, -0]
tensor([0.1000, 2.2000, 4.9000])
a[:, -1]
tensor([ 4.,  7., 10.])
Clearly, -1 refers to the last element (and -0 is just 0, so a[:, -0] is the same as a[:, 0]).
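As a quick sketch of my own (same a as above), negative indices behave just like they do for Python lists:

a[-1]       # last row: tensor([ 4.9000,  5.2000,  8.0000,  9.0000, 10.0000])
a[:, -2]    # second-to-last column: tensor([3., 6., 9.])
a[-1, -1]   # bottom-right element: tensor(10.)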
torch.max
https://blog.csdn.net/Z_lbj/article/details/79766690 covers the various cases in detail.
Here is the case I ran into:
a = torch.tensor([[0.1, 1.2, 2, 3, 4], [2.2, 3.1, 5, 6, 7], [4.9, 5.2, 8, 9, 10]])
a
tensor([[ 0.1000,  1.2000,  2.0000,  3.0000,  4.0000],
        [ 2.2000,  3.1000,  5.0000,  6.0000,  7.0000],
        [ 4.9000,  5.2000,  8.0000,  9.0000, 10.0000]])
torch.max(a, 1)
(tensor([ 4.,  7., 10.]), tensor([4, 4, 4]))
The maximum is searched along dimension 1: dimension 0 is held fixed and values are compared along dimension 1 (i.e. within each row).
Two things are returned: first the maximum values themselves, then their indices along dimension 1.
Both returned values are Tensors.
torch.min works the same way.
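As an extra sketch (my addition, same a as above), the same call with dim=0 compares down the columns instead, and torch.min has exactly the same interface:

values, indices = torch.max(a, 0)  # max over dimension 0, i.e. per column
values    # tensor([ 4.9000,  5.2000,  8.0000,  9.0000, 10.0000])
indices   # tensor([2, 2, 2, 2, 2])  -> every column's maximum sits in row 2
torch.min(a, 1)   # (tensor([0.1000, 2.2000, 4.9000]), tensor([0, 0, 0]))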
torch.nonzero(tensor) returns the indices of the non-zero elements of tensor.
https://blog.csdn.net/monchin/article/details/79750216
a = torch.tensor([[0.1, 1.2, 2, 3, 4], [2.2, 3.1, 5, 6, 7], [4.9, 5.2, 8, 9, 10]])
a
tensor([[ 0.1000,  1.2000,  2.0000,  3.0000,  4.0000],
        [ 2.2000,  3.1000,  5.0000,  6.0000,  7.0000],
        [ 4.9000,  5.2000,  8.0000,  9.0000, 10.0000]])
torch.nonzero(a)
tensor([[0, 0],
        [0, 1],
        [0, 2],
        [0, 3],
        [0, 4],
        [1, 0],
        [1, 1],
        [1, 2],
        [1, 3],
        [1, 4],
        [2, 0],
        [2, 1],
        [2, 2],
        [2, 3],
        [2, 4]])
Every element here is non-zero, so every (row, column) index pair is returned.
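A use I find handy (my own addition, same a): combine torch.nonzero with a boolean condition to locate the elements that satisfy a predicate:

torch.nonzero(a > 5)   # index pairs of the elements greater than 5
# -> [1, 3], [1, 4], [2, 1], [2, 2], [2, 3], [2, 4]  (i.e. 6, 7, 5.2, 8, 9, 10)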
A tensor can be indexed directly with a list of indices!
a = torch.tensor([[0.1, 1.2, 2, 3, 4], [2.2, 3.1, 5, 6, 7], [4.9, 5.2, 8, 9, 10]])
a
tensor([[ 0.1000,  1.2000,  2.0000,  3.0000,  4.0000],
        [ 2.2000,  3.1000,  5.0000,  6.0000,  7.0000],
        [ 4.9000,  5.2000,  8.0000,  9.0000, 10.0000]])
a[1]
tensor([2.2000, 3.1000, 5.0000, 6.0000, 7.0000])
a[1, 2]
tensor(5.)
a[[1, 2]]
tensor([[ 2.2000,  3.1000,  5.0000,  6.0000,  7.0000],
        [ 4.9000,  5.2000,  8.0000,  9.0000, 10.0000]])
a[torch.tensor([1, 2])]
tensor([[ 2.2000,  3.1000,  5.0000,  6.0000,  7.0000],
        [ 4.9000,  5.2000,  8.0000,  9.0000, 10.0000]])
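Beyond whole rows, indexing also works with a boolean mask or with paired row/column index lists; a small sketch of my own on the same a:

a[a > 5]            # boolean mask: tensor([ 6.0000,  7.0000,  5.2000,  8.0000,  9.0000, 10.0000])
a[[0, 2], [1, 4]]   # picks a[0, 1] and a[2, 4]: tensor([ 1.2000, 10.0000])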
torch.sort: sorts the input tensor along a given dimension in ascending order
torch.sort(input, dim=None, descending=False, out=None) -> (Tensor, LongTensor)
Sorts the input tensor along the given dimension in ascending order. If dim is not given, the last dimension of the input is used by default.
If descending is True, it sorts in descending order instead.
It returns a tuple (sorted_tensor, sorted_indices), where sorted_indices holds, for each element of sorted_tensor, its index along the sorted dimension in the original input.
Example from the official documentation:
>>> x = torch.randn(3, 4)
>>> sorted, indices = torch.sort(x)
>>> sorted

-1.6747  0.0610  0.1190  1.4137
-1.4782  0.7159  1.0341  1.3678
-0.3324 -0.0782  0.3518  0.4763
[torch.FloatTensor of size 3x4]

>>> indices

 0  1  3  2
 2  1  0  3
 3  1  0  2
[torch.LongTensor of size 3x4]

>>> sorted, indices = torch.sort(x, 0)
>>> sorted

-1.6747 -0.0782 -1.4782 -0.3324
 0.3518  0.0610  0.4763  0.1190
 1.0341  0.7159  1.4137  1.3678
[torch.FloatTensor of size 3x4]

>>> indices

 0  2  1  2
 2  0  2  0
 1  1  0  1
[torch.LongTensor of size 3x4]
Continuing with my own example from above:
a = torch.tensor([[0.1, 71.2, 2, 53, 4], [2.2, 23.1, 15, 6, 7], [4.9, 52.2, 86, 9, 10]])
a
tensor([[ 0.1000, 71.2000,  2.0000, 53.0000,  4.0000],
        [ 2.2000, 23.1000, 15.0000,  6.0000,  7.0000],
        [ 4.9000, 52.2000, 86.0000,  9.0000, 10.0000]])
torch.sort(a)
Output 1: the sorted tensor.
(tensor([[ 0.1000,  2.0000,  4.0000, 53.0000, 71.2000],
         [ 2.2000,  6.0000,  7.0000, 15.0000, 23.1000],
         [ 4.9000,  9.0000, 10.0000, 52.2000, 86.0000]]),
Output 2: for each number in the sorted tensor, its index in the original (pre-sort) tensor along the sorted dimension (here the default, the last dimension, i.e. dim 1).
 tensor([[0, 2, 4, 3, 1],
         [0, 3, 4, 2, 1],
         [0, 3, 4, 1, 2]]))
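As a further sketch of my own (same a), descending=True reverses the order, and the returned indices can be fed to torch.gather to apply the same reordering again:

sorted_a, idx = torch.sort(a, dim=1, descending=True)
sorted_a[0]               # tensor([71.2000, 53.0000,  4.0000,  2.0000,  0.1000])
torch.gather(a, 1, idx)   # reorders a with idx, identical to sorted_a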
tensor_1.new(shape): creating a new tensor_2 from an existing tensor_1.
As I understand it, the purpose is to get a tensor_2 with the same settings as the original tensor_1 (same dtype and device, e.g. CUDA).
The values of the two tensors are unrelated; new() returns uninitialized memory, so you normally assign fresh values to tensor_2!
a = torch.tensor([[0.1, 71.2, 2, 53, 4], [2.2, 23.1, 15, 6, 7], [4.9, 52.2, 86, 9, 10]])
a
tensor([[ 0.1000, 71.2000,  2.0000, 53.0000,  4.0000],
        [ 2.2000, 23.1000, 15.0000,  6.0000,  7.0000],
        [ 4.9000, 52.2000, 86.0000,  9.0000, 10.0000]])
# First way of passing the shape
a.new(a.shape[0], a.shape[1])
tensor([[5.0761e-38, 0.0000e+00, 5.7453e-44, 0.0000e+00,        nan],
        [4.5716e-41, 1.3733e-14, 6.4076e+07, 2.0706e-19, 7.3909e+22],
        [2.4176e-12, 1.1625e+33, 8.9605e-01, 1.1632e+33, 5.6003e-02]])
# Second way
a.new(a.shape)
tensor([[1.4975e-21, 4.5716e-41, 6.8953e-38, 0.0000e+00,        nan],
        [4.5716e-41, 1.3733e-14, 9.5680e+20, 7.2065e+31, 2.6301e+20],
        [1.4601e-19, 6.4069e+02, 4.3066e+21, 1.1824e+22, 4.3066e+21]])
# Third way
a.new(a.size(0), 1)
tensor([[1.4975e-21],
        [4.5716e-41],
        [5.0763e-38]])
# Assign the values we actually want; here fill with 0
a.new(a.size(0), 1).fill_(0)
tensor([[0.],
        [0.],
        [0.]])
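In newer PyTorch versions the same effect is obtained more explicitly with new_zeros / new_ones / new_full, which also copy a's dtype and device; a small sketch of my own of the equivalents:

a.new_zeros((a.size(0), 1))        # same result as a.new(a.size(0), 1).fill_(0)
a.new_full((a.size(0), 1), 0.5)    # new tensor with a's dtype/device, filled with 0.5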
More torch operations:
torch.index_select(input, dim, index, out=None):
Slices the input along the specified dimension according to index, taking the entries listed in index, and returns a new tensor. The returned tensor has the same number of dimensions as the original, and it does not share memory with the original tensor.
a = torch.tensor([[0.1, 71.2, 2, 53, 4], [2.2, 23.1, 15, 6, 7], [4.9, 52.2, 86, 9, 10]])
a
tensor([[ 0.1000, 71.2000,  2.0000, 53.0000,  4.0000],
        [ 2.2000, 23.1000, 15.0000,  6.0000,  7.0000],
        [ 4.9000, 52.2000, 86.0000,  9.0000, 10.0000]])
indices = torch.LongTensor([0, 2])
indices
tensor([0, 2])
# Slice dimension 0 of tensor a
torch.index_select(a, 0, indices)
tensor([[ 0.1000, 71.2000,  2.0000, 53.0000,  4.0000],
        [ 4.9000, 52.2000, 86.0000,  9.0000, 10.0000]])
# Slice dimension 1 of tensor a
torch.index_select(a, 1, indices)
tensor([[ 0.1000,  2.0000],
        [ 2.2000, 15.0000],
        [ 4.9000, 86.0000]])
Note: the same index can be selected more than once, for example:
indices = torch.LongTensor([0, 0, 2])
indices
tensor([0, 0, 2])
# Note: index 0 of dimension 0 is taken twice!
torch.index_select(a, 0, indices)
tensor([[ 0.1000, 71.2000, 2.0000, 53.0000, 4.0000],
[ 0.1000, 71.2000, 2.0000, 53.0000, 4.0000],
[ 4.9000, 52.2000, 86.0000, 9.0000, 10.0000]])
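For comparison (my own note), the same result can also be obtained with plain tensor indexing; index_select is mainly convenient when the dimension itself is a variable:

a[indices]            # same rows as torch.index_select(a, 0, indices)
a[:, indices]         # same columns as torch.index_select(a, 1, indices)
dim = 1               # hypothetical run-time choice of dimension
torch.index_select(a, dim, indices)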
Convolution parameters shared by nn.Conv2d and torch.nn.functional.conv2d (here mainly groups):
The code:
# assumes: import torch; import torch.nn.functional as F
def xcorr(self, z, x):
    # F.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)
    batch_size, _, H, W = x.shape
    x = torch.reshape(x, (1, -1, H, W))      # channels 3x32 -> 1x96: fold the batch into the channel axis
    out = F.conv2d(x, z, groups=batch_size)  # z is the filter bank: 3 filters of 32 channels each; groups splits the input channels into 3 groups
    xcorr_out = out.transpose(0, 1)          # (1, 3, h, w) -> (3, 1, h, w): move the per-group outputs back to the batch axis
    return xcorr_out
x (after the reshape): torch.Size([1, 96, 49, 49])
z : torch.Size([3, 32, 17, 17])
out: torch.Size([1, 3, 33, 33])
F.conv2d(x, z, groups=batch_size) uses z as the convolution kernels and x as the input!
groups=3 splits the channels of x into three groups:
x1: torch.Size([1, 32, 49, 49])
x2: torch.Size([1, 32, 49, 49])
x3: torch.Size([1, 32, 49, 49])
groups determines how many groups the input channels are split into; how many filters are applied to each group is out_channels / groups. This also explains why both out_channels and in_channels must be divisible by groups.
Note: out_channels is just z.shape[0], i.e. the number of filters, here 3.
1.
The filters z are split into groups as well; z.shape = torch.Size([3, 32, 17, 17]):
z1: torch.Size([1, 32, 17, 17])
z2: torch.Size([1, 32, 17, 17])
z3: torch.Size([1, 32, 17, 17])
2.
After the input is split into groups, each group is convolved out_channels / groups times; here that is 3 / 3 = 1.
So the computation works out as follows:
each group of x is used exactly once, and each group consumes one group of z:
x1 * z1 --> torch.Size([1, 32, 49, 49]) * torch.Size([1, 32, 17, 17]) = ([1, 1, 33, 33])
x2 * z2 --> torch.Size([1, 32, 49, 49]) * torch.Size([1, 32, 17, 17]) = ([1, 1, 33, 33])
x3 * z3 --> torch.Size([1, 32, 49, 49]) * torch.Size([1, 32, 17, 17]) = ([1, 1, 33, 33])
The three results are then concatenated along dimension 1, which gives the out of shape [1, 3, 33, 33] shown above.
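A small self-contained sketch of my own (random data, same shapes as above) that checks the grouped convolution really equals the three per-group convolutions concatenated along dimension 1:

import torch
import torch.nn.functional as F

z = torch.randn(3, 32, 17, 17)        # 3 filters ("templates"), 32 channels each
x = torch.randn(1, 3 * 32, 49, 49)    # 3 batch elements folded into the channel axis
out = F.conv2d(x, z, groups=3)        # torch.Size([1, 3, 33, 33])

# Group i of x convolved with filter i of z, then concatenated along dim 1.
parts = [F.conv2d(x[:, 32 * i:32 * (i + 1)], z[i:i + 1]) for i in range(3)]
manual = torch.cat(parts, dim=1)      # torch.Size([1, 3, 33, 33])
print(torch.allclose(out, manual, atol=1e-5))   # True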
contiguous: view can only be used on a contiguous tensor. If transpose, permute, etc. were applied before view, call contiguous() first to get a contiguous copy.
delta = delta.permute(1, 2, 3, 0).contiguous().view(4, -1).data.cpu().numpy()
# permute reorders the dimensions; it is best to call contiguous() before view, because after transpose/permute view needs a contiguous copy
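A minimal sketch of my own showing why the contiguous() call is needed:

t = torch.randn(2, 3, 4)
p = t.permute(2, 0, 1)        # shape (4, 2, 3), but the underlying memory is unchanged
# p.view(4, -1)               # would raise a RuntimeError: view requires contiguous memory
p.contiguous().view(4, -1)    # works: contiguous() first makes a compact copy
p.reshape(4, -1)              # alternative: reshape() copies automatically when needed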
nn.Conv2d(in_channels, out_channels, kernel_size, stride=1,
padding=0, dilation=1, groups=1, bias=True)
The default dilation = 1 means ordinary convolution.
dilation = 2 means dilated (atrous) convolution with a gap of 1 between kernel elements.
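A short sketch of my own showing the effect on the output size; with dilation d and kernel size k the effective kernel size is d*(k-1)+1:

import torch
import torch.nn as nn

x = torch.randn(1, 1, 10, 10)
nn.Conv2d(1, 1, kernel_size=3, dilation=1)(x).shape   # torch.Size([1, 1, 8, 8])
nn.Conv2d(1, 1, kernel_size=3, dilation=2)(x).shape   # torch.Size([1, 1, 6, 6]), effective kernel 5x5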