
PyTorch: Statistical Properties of Tensors (norm, mean, sum, prod, argmin/argmax, keepdim, kthvalue, topk)


L1 norm: the sum of the absolute values of all elements
L2 norm: the square root of the sum of the squares of all elements
max(): the maximum value of the tensor (and, when a dim is given, the corresponding indices)
min(): the minimum value of the tensor (and, when a dim is given, the corresponding indices)
mean(): the mean of all elements
prod(): the product of all elements
sum(): the sum of all elements
argmax(): returns the index of the maximum value
argmin(): returns the index of the minimum value
Called without arguments, argmax and argmin first flatten the tensor to 1-D and then look for the extreme value, so the returned index refers to the flattened tensor, not to the original shape.
Use argmax(dim=1) (or another dim) to specify the dimension along which the comparison is made.

1. norm

L1 norm: the sum of the absolute values of all elements.

L2 norm: the square root of the sum of the squares of all elements.

import torch

a = torch.full([8], 1.)   # use a float fill value: norm() requires a floating-point tensor
b = a.view(2, 4)
c = a.view(2, 2, 2)

a.norm(1), b.norm(1), c.norm(1)
# all tensor(8.)

a.norm(2), b.norm(2), c.norm(2)
# all tensor(2.8284), i.e. sqrt(8)

The norm can also be taken along a specified dimension; the dimension over which the norm is computed is reduced away (it disappears from the result's shape).

import torch

a = torch.full([8], 1.)
b = a.view(2, 4)
c = a.view(2, 2, 2)

b.norm(1, dim=1)
# tensor([4., 4.])  one value per row: the size-4 dimension dim=1 is reduced

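The same idea extends to the 2-norm and to higher-rank tensors. Here is a minimal sketch continuing the a, b, c defined above (the commented results follow from a tensor of ones):

import torch

a = torch.full([8], 1.)
b = a.view(2, 4)
c = a.view(2, 2, 2)

print(b.norm(2, dim=1))   # tensor([2., 2.])                shape [2]:    sqrt(1+1+1+1) per row
print(c.norm(1, dim=0))   # tensor([[2., 2.], [2., 2.]])    shape [2, 2]: dim 0 is reduced
print(c.norm(2, dim=2))   # tensor([[1.4142, 1.4142],
                          #         [1.4142, 1.4142]])      shape [2, 2]: sqrt(2) per pair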

2. mean, sum, prod, max, min, argmin, argmax

argmin / argmax: called without arguments, they flatten the tensor to 1-D first and then return the index of the minimum / maximum in that flattened view.

import torch

a = torch.randn(4, 6)

print('a.shape = ', a.shape)
print('a = ', a)

a_argmax = a.argmax()
a_argmax_dim_0 = a.argmax(dim=0)
a_argmax_dim_1 = a.argmax(dim=1)

print('\na_argmax = ', a_argmax)
print('a_argmax_dim_0 = ', a_argmax_dim_0)
print('a_argmax_dim_1 = ', a_argmax_dim_1)

Output:

a.shape =  torch.Size([4, 6])
a =  tensor([[ 0.4317, -0.9791, -1.8569,  0.9546,  1.5674, -0.7068],
             [ 0.4386, -1.5702,  0.9274,  0.9188, -1.3072, -1.2542],
             [ 0.7009, -0.1240,  1.0909,  0.5051,  1.4061, -0.7690],
             [-0.7982,  0.3734,  0.9044,  0.8111, -0.1462,  0.0558]])

a_argmax =  tensor(4)
a_argmax_dim_0 =  tensor([2, 3, 2, 0, 0, 3])
a_argmax_dim_1 =  tensor([4, 2, 4, 2])
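
Besides argmax, the heading also lists mean, sum, prod, max, and min. The following is a minimal sketch of these aggregations (a small deterministic tensor is used here, so the commented results can be checked by hand):

import torch

a = torch.arange(1, 9).float().view(2, 4)   # tensor([[1., 2., 3., 4.], [5., 6., 7., 8.]])

print(a.mean())       # tensor(4.5000)   mean of all elements
print(a.sum())        # tensor(36.)      sum of all elements
print(a.prod())       # tensor(40320.)   product of all elements (8!)
print(a.max())        # tensor(8.)       maximum over the flattened tensor
print(a.min())        # tensor(1.)       minimum over the flattened tensor
print(a.max(dim=1))   # values=tensor([4., 8.]), indices=tensor([3, 3])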

3. keepdim

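By default, reduction ops such as max, argmax, sum, and mean drop the reduced dimension from the result. Passing keepdim=True keeps that dimension with size 1, so the result has the same number of dimensions as the input, which is convenient for broadcasting. A minimal sketch:

import torch

a = torch.randn(4, 6)

print(a.max(dim=1).values.shape)                 # torch.Size([4])     dim 1 is removed
print(a.max(dim=1, keepdim=True).values.shape)   # torch.Size([4, 1])  dim 1 is kept with size 1

print(a.sum(dim=1).shape)                        # torch.Size([4])
print(a.sum(dim=1, keepdim=True).shape)          # torch.Size([4, 1])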

4. topk, kthvalue

4.1 topk: the k largest values

import torch

a = torch.randn(4, 6)

print('a.shape = ', a.shape)
print('a = ', a)

a_topk = a.topk(3, dim=1)
print('\na_topk = ', a_topk)

Output:

a.shape =  torch.Size([4, 6])
a =  tensor([[ 0.5713, -0.8378, -0.9350,  1.4258,  1.0426, -1.3174],
             [-1.5093, -0.9687,  1.1797,  0.9376, -0.2584,  0.3872],
             [ 1.7028, -1.4444, -0.5496,  0.0120, -0.2293, -1.5676],
             [-0.0805,  1.0818, -1.0642, -1.1229,  0.1533, -0.9521]])

a_topk =  torch.return_types.topk(
values=tensor([[ 1.4258,  1.0426,  0.5713],
               [ 1.1797,  0.9376,  0.3872],
               [ 1.7028,  0.0120, -0.2293],
               [ 1.0818,  0.1533, -0.0805]]),
indices=tensor([[3, 4, 0],
                [2, 3, 5],
                [0, 3, 4],
                [1, 4, 0]]))

Process finished with exit code 0
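
topk returns the k largest values by default; passing largest=False returns the k smallest instead. A minimal sketch (the values depend on the random input, so only shapes are shown):

import torch

a = torch.randn(4, 6)

# k smallest values per row instead of the k largest
values, indices = a.topk(3, dim=1, largest=False)
print(values.shape, indices.shape)   # torch.Size([4, 3]) torch.Size([4, 3])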

4.2 kthvalue: the k-th smallest value (the k-th element in ascending order)

import torch

a = torch.randn(4, 6)

print('a.shape = ', a.shape)
print('a = ', a)

a_kthvalue = a.kthvalue(3, dim=1)
print('\na_kthvalue = ', a_kthvalue)

Output:

a.shape =  torch.Size([4, 6])
a =  tensor([[-0.6937, -2.6252, -0.0817,  0.1022, -0.4947, -0.0497],
             [ 1.5205,  0.1827, -1.1397,  2.1021,  1.0715,  1.2167],
             [-1.6002,  1.1374, -1.2099,  2.4795, -0.6292, -1.1339],
             [ 0.9659, -0.1593, -0.0963,  0.8514, -1.0020,  0.9966]])

a_kthvalue =  torch.return_types.kthvalue(
values=tensor([-0.4947,  1.0715, -1.1339, -0.0963]),
indices=tensor([4, 4, 5, 2]))

Process finished with exit code 0
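
Note that kthvalue always counts from the smallest element: k=1 gives the per-row minimum, and k equal to the size of the reduced dimension gives the per-row maximum. A minimal sketch:

import torch

a = torch.randn(4, 6)

print(a.kthvalue(1, dim=1).values)           # same as a.min(dim=1).values
print(a.kthvalue(a.size(1), dim=1).values)   # same as a.max(dim=1).values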

5. compare: >, >=, <, <=, !=, ==


5.1 >

import torch

a = torch.randn(4, 6)

print('a.shape = ', a.shape)
print('a = ', a)

a_0 = a > 0
print('\na_0 = ', a_0)

Output:

a.shape =  torch.Size([4, 6])
a =  tensor([[-1.3893,  1.4089, -0.1790, -0.4641, -2.9107, -1.2560],
             [ 0.1702,  0.3689, -1.4860,  0.5642,  0.8496,  0.4877],
             [ 0.7780, -0.5726,  1.0474,  0.7246,  0.2890, -1.1601],
             [ 1.2610,  0.1574, -0.7690,  0.7012,  0.2023,  0.8884]])

a_0 =  tensor([[False,  True, False, False, False, False],
               [ True,  True, False,  True,  True,  True],
               [ True, False,  True,  True,  True, False],
               [ True,  True, False,  True,  True,  True]])

Process finished with exit code 0
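
The resulting boolean tensor can be used directly as a mask, for example to pick out the matching elements. A minimal sketch:

import torch

a = torch.randn(4, 6)
mask = a > 0

print(a[mask])                        # 1-D tensor containing all positive elements
print(torch.masked_select(a, mask))   # equivalent to the boolean indexing above
print(mask.sum())                     # number of positive elements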

5.2 !=

import torch

a = torch.randn(4, 6)

print('a.shape = ', a.shape)
print('a = ', a)

a_0 = a != 0
print('\na_0 = ', a_0)

Output:

a.shape =  torch.Size([4, 6])
a =  tensor([[ 0.0204,  0.3504,  1.2778, -0.8016, -2.2371, -2.1239],
             [-0.3999,  2.6354,  3.4434,  0.0612, -0.7063, -0.2649],
             [ 0.1774, -0.1433, -1.0806, -2.4606, -0.4880,  0.4409],
             [-1.2463,  0.8048,  0.9639, -0.1631,  0.4157, -0.1088]])

a_0 =  tensor([[True, True, True, True, True, True],
               [True, True, True, True, True, True],
               [True, True, True, True, True, True],
               [True, True, True, True, True, True]])

Process finished with exit code 0

5.3 ==

import torch

a = torch.ones(4, 6)
b = torch.randn(4, 6)

a_b_eq01 = torch.eq(a, b)
a_b_eq02 = a == b

print('a_b_eq01 = ', a_b_eq01)
print('a_b_eq02 = ', a_b_eq02)

a_a_eq01 = torch.eq(a, a)
a_a_eq02 = a == a

print('\na_a_eq01 = ', a_a_eq01)
print('a_a_eq02 = ', a_a_eq02)

Output:

a_b_eq01 =  tensor([[False, False, False, False, False, False],
        [False, False, False, False, False, False],
        [False, False, False, False, False, False],
        [False, False, False, False, False, False]])
a_b_eq02 =  tensor([[False, False, False, False, False, False],
        [False, False, False, False, False, False],
        [False, False, False, False, False, False],
        [False, False, False, False, False, False]])

a_a_eq01 =  tensor([[True, True, True, True, True, True],
        [True, True, True, True, True, True],
        [True, True, True, True, True, True],
        [True, True, True, True, True, True]])
a_a_eq02 =  tensor([[True, True, True, True, True, True],
        [True, True, True, True, True, True],
        [True, True, True, True, True, True],
        [True, True, True, True, True, True]])

Process finished with exit code 0
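
torch.eq (and ==) compare element-wise and return a boolean tensor. If you want a single True/False answer for whether two tensors are entirely identical, torch.equal does that. A minimal sketch:

import torch

a = torch.ones(4, 6)
b = torch.randn(4, 6)

print(torch.equal(a, a))   # True: same shape and same elements
print(torch.equal(a, b))   # False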



