- x = torch.rand(4, 3)                    # uniform random values in [0, 1)
- x = torch.zeros(4, 3, dtype=torch.long) # all zeros, int64
- x = torch.tensor([5.5, 3])              # build a tensor directly from data
- x = x.new_ones(4, 3, dtype=torch.double) # create a new tensor; by default it keeps the same torch.dtype and torch.device as x
- # equivalently, as before: x = torch.ones(4, 3, dtype=torch.double)
- print(x)
- x = torch.randn_like(x, dtype=torch.float)
- # resets the data type
- print(x)
- # the result has the same size
- # get its dimension info:
- print(x.size())
- print(x.shape)
- print(x.shape[1]) # index a specific dimension (equivalently x.size(1)); note shape is indexed with [], not called
- # Resizing: to change the size or shape of a tensor, use torch.view:
- x = torch.randn(4, 4)
- y = x.view(16)
- z = x.view(-1, 8) # -1 means this dimension is inferred from the others
- print(x.size(), y.size(), z.size())
- # Note: the new tensor returned by view() shares memory with the source tensor (they are views of the same data), so modifying one also changes the other. As the name suggests, view only changes how the tensor is observed.
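The memory-sharing behavior of view() can be seen directly. A minimal sketch (zeros are used instead of random values so the result is deterministic; the clone() contrast is an addition for illustration):

```python
import torch

x = torch.zeros(4, 4)
y = x.view(16)     # same underlying storage, different shape
z = x.view(-1, 8)  # -1: this dimension is inferred -> shape (2, 8)

# modifying y in place also changes x, since they share memory
y[0] = 100.0
print(x[0, 0])  # tensor(100.)

# clone() before view() gives an independent copy instead
w = x.clone().view(16)
w[1] = -1.0
print(x[0, 1])  # tensor(0.) -- x is untouched by writes to w
```

clone() is the usual way to get a reshaped copy when you do not want the two tensors coupled.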
- Keyword arguments of tensor creation:
- Keyword args:
- dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
- Default: if ``None``, infers data type from :attr:`data`.
- device (:class:`torch.device`, optional): the desired device of returned tensor.
- Default: if ``None``, uses the current device for the default tensor type
- (see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
- for CPU tensor types and the current CUDA device for CUDA tensor types.
- requires_grad (bool, optional): If autograd should record operations on the
- returned tensor. Default: ``False``.
- pin_memory (bool, optional): If set, returned tensor would be allocated in
- the pinned memory. Works only for CPU tensors. Default: ``False``.
Gradients are accumulated during backpropagation: every call to backward() adds the new gradients onto the existing ones rather than replacing them, so gradients are normally zeroed before each backward pass.
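The accumulation behavior is easy to verify: two backward passes double the stored gradient unless it is zeroed in between (the quadratic loss here is an illustrative choice):

```python
import torch

x = torch.ones(2, requires_grad=True)

y = (x * x).sum()      # d/dx sum(x^2) = 2x
y.backward()
print(x.grad)          # tensor([2., 2.])

# a second backward pass on a fresh forward accumulates into x.grad
y2 = (x * x).sum()
y2.backward()
print(x.grad)          # tensor([4., 4.]) -- accumulated, not replaced

# zero the gradient before the next pass, as done in training loops
x.grad.zero_()
y3 = (x * x).sum()
y3.backward()
print(x.grad)          # tensor([2., 2.])
```

In a real training loop this zeroing is usually done via `optimizer.zero_grad()` before each backward pass.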
Parallelism: