# LSTM constructor arguments: (input_size, hidden_size, num_layers)
# Input tensor shape: (sequence_length, batch_size, input_size)
# Initial hidden state and initial cell state shapes:
# (num_layers * num_directions, batch_size, hidden_size)

>>> import torch
>>> import torch.nn as nn
>>> rnn = nn.LSTM(5, 6, 2)
>>> input = torch.randn(1, 3, 5)
>>> h0 = torch.randn(2, 3, 6)
>>> c0 = torch.randn(2, 3, 6)
>>> output, (hn, cn) = rnn(input, (h0, c0))
>>> output
tensor([[[ 0.0447, -0.0335,  0.1454,  0.0438,  0.0865,  0.0416],
         [ 0.0105,  0.1923,  0.5507, -0.1742,  0.1569, -0.0548],
         [-0.1186,  0.1835, -0.0022, -0.1388, -0.0877, -0.4007]]],
       grad_fn=<StackBackward>)
>>> hn
tensor([[[ 0.4647, -0.2364,  0.0645, -0.3996, -0.0500, -0.0152],
         [ 0.3852,  0.0704,  0.2103, -0.2524,  0.0243,  0.0477],
         [ 0.2571,  0.0608,  0.2322,  0.1815, -0.0513, -0.0291]],

        [[ 0.0447, -0.0335,  0.1454,  0.0438,  0.0865,  0.0416],
         [ 0.0105,  0.1923,  0.5507, -0.1742,  0.1569, -0.0548],
         [-0.1186,  0.1835, -0.0022, -0.1388, -0.0877, -0.4007]]],
       grad_fn=<StackBackward>)
>>> cn
tensor([[[ 0.8083, -0.5500,  0.1009, -0.5806, -0.0668, -0.1161],
         [ 0.7438,  0.0957,  0.5509, -0.7725,  0.0824,  0.0626],
         [ 0.3131,  0.0920,  0.8359,  0.9187, -0.4826, -0.0717]],

        [[ 0.1240, -0.0526,  0.3035,  0.1099,  0.5915,  0.0828],
         [ 0.0203,  0.8367,  0.9832, -0.4454,  0.3917, -0.1983],
         [-0.2976,  0.7764, -0.0074, -0.1965, -0.1343, -0.6683]]],
       grad_fn=<StackBackward>)
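
Note that `output` collects the top layer's hidden state at every time step, while `hn` holds each layer's final hidden state. That is why the printed `output` above matches the second (last) layer of `hn`: with sequence_length = 1, the single step in `output` is exactly the last layer's final hidden state. A minimal check, continuing the session above:

>>> output.shape, hn.shape, cn.shape
(torch.Size([1, 3, 6]), torch.Size([2, 3, 6]), torch.Size([2, 3, 6]))
>>> torch.allclose(output[-1], hn[-1])
True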

Forget gate structure analysis:
Input gate structure analysis:
Cell state update analysis:
Output gate structure analysis:
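
For reference, the four analyses above correspond to the standard LSTM cell equations (textbook form, not taken from this article's own derivation; $\sigma$ is the sigmoid, $\odot$ is element-wise multiplication, and the $W$, $b$ are learned parameters):

$$
\begin{aligned}
f_t &= \sigma(W_f\,[h_{t-1}, x_t] + b_f) && \text{(forget gate)} \\
i_t &= \sigma(W_i\,[h_{t-1}, x_t] + b_i) && \text{(input gate)} \\
\tilde{C}_t &= \tanh(W_C\,[h_{t-1}, x_t] + b_C) && \text{(candidate cell state)} \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{(cell state update)} \\
o_t &= \sigma(W_o\,[h_{t-1}, x_t] + b_o) && \text{(output gate)} \\
h_t &= o_t \odot \tanh(C_t) && \text{(hidden state)}
\end{aligned}
$$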
What is a Bi-LSTM?
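
A Bi-LSTM (bidirectional LSTM) runs two LSTMs over the sequence, one left-to-right and one right-to-left, and concatenates their hidden states at each step, so the output feature dimension doubles. A minimal sketch mirroring the example above, using nn.LSTM's `bidirectional=True` flag (the `2 * 2` is num_layers * num_directions):

>>> birnn = nn.LSTM(5, 6, 2, bidirectional=True)
>>> h0 = torch.randn(2 * 2, 3, 6)  # (num_layers * num_directions, batch_size, hidden_size)
>>> c0 = torch.randn(2 * 2, 3, 6)
>>> out, (hn, cn) = birnn(torch.randn(1, 3, 5), (h0, c0))
>>> out.shape  # feature dimension doubles: both directions are concatenated
torch.Size([1, 3, 12])
>>> hn.shape
torch.Size([4, 3, 6])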
Using LSTM in PyTorch:
LSTM advantages:
LSTM disadvantages: