A fully connected network is easy to picture: the input passes through a hidden layer and produces an output h.
The expression at this point is:
h = tanh(U(x))
where h is the output, U is the hidden-layer function, and x is the input.
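As a quick sketch of this step, h = tanh(U(x)) is just a linear layer followed by an activation (the sizes below, input 8 and hidden 16, are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn

# h = tanh(U(x)), with U implemented as a single fully connected layer.
U = nn.Linear(8, 16)
x = torch.rand(4, 8)   # a batch of 4 input vectors
h = torch.tanh(U(x))   # the hidden-layer output
print(h.shape)         # torch.Size([4, 16])
```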
From this angle, an RNN can be seen as several fully connected networks placed side by side. On their own they have no connection to each other, but the elements of an input sequence are often related.
To capture that relation: for example, if I recognize that the previous character is "我" ("I"), it is easy to guess that the next word is "菜" ("bad at this"). A natural idea, then, is to feed the previous recognition result into the next step to help the network along, a bit like an input method's autocomplete. That gives us the RNN structure:
Feeding the raw result straight into the next step would be crude, so we add learnable parameters and combine everything into the output.
h = tanh(U(x) + W(h_0))
where tanh is the activation function, U is the function applied to the input, W is the function applied to the previous hidden state, and x is the input. To keep things simple, we implement both functions as fully connected layers, i.e.
U(x) = W_u x + b_u
W(h) = W_w h + b_w
Putting these together gives exactly the form in the official PyTorch documentation:
h_t = tanh(W_u x_t + b_u + W_w h_{t-1} + b_w)
Now we can write code. First, we have a sequence in hand; each frame pulled out of it is a vector x, and the size of that x needs to be fixed up front. Second, you need some idea of how large the RNN's hidden layer should be. Beyond that, the rest is largely a matter of taste.
import torch
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.fc_u = nn.Linear(input_size, hidden_size)   # U: acts on the input
        self.fc_w = nn.Linear(hidden_size, hidden_size)  # W: acts on the previous hidden state

    def forward(self, x):
        batch_size, seq_len, _ = x.shape
        output = torch.zeros(batch_size, seq_len, self.hidden_size)
        h = torch.zeros(batch_size, self.hidden_size)    # h_0
        for i in range(seq_len):
            # h_t = tanh(U(x_t) + W(h_{t-1}))
            h = torch.tanh(self.fc_u(x[:, i, :]) + self.fc_w(h))
            output[:, i, :] = h
        return output
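As a sanity check that this recurrence really is what PyTorch's built-in nn.RNN computes, we can replay h_t = tanh(W_u x_t + b_u + W_w h_{t-1} + b_w) by hand using the layer's own parameters (the sizes here are arbitrary):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
input_size, hidden_size, seq_len = 8, 16, 5
rnn = nn.RNN(input_size, hidden_size, batch_first=True)  # tanh nonlinearity by default

x = torch.rand(1, seq_len, input_size)  # (batch, seq_len, features)
out, _ = rnn(x)

# replay the recurrence by hand with the same parameters
h = torch.zeros(1, hidden_size)  # h_0
for t in range(seq_len):
    h = torch.tanh(x[:, t, :] @ rnn.weight_ih_l0.T + rnn.bias_ih_l0
                   + h @ rnn.weight_hh_l0.T + rnn.bias_hh_l0)

print(torch.allclose(h, out[:, -1, :], atol=1e-6))  # True
```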
Once this flow is clear, a new problem appears. Suppose we now have to fill in the blank in "我__菜鸡" ("I __ a noob"). Following the earlier idea we would again fill in "菜鸡", which is clearly wrong: we need to understand the context on both sides. Hence the bidirectional RNN.
On top of the unidirectional RNN above, we simply run the recurrence once more from back to front and combine the two results, so a bidirectional RNN can be written as
h_1 = tanh(U(x_1) + W(h_0) + U'(x_1) + W'(h_2))
The code is also simple: we just wrap another layer around the previous RNN.
class RNNbase(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.fc_u = nn.Linear(input_size, hidden_size)
        self.fc_w = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        batch_size, seq_len, _ = x.shape
        output = torch.zeros(batch_size, seq_len, self.hidden_size)
        h = torch.zeros(batch_size, self.hidden_size)
        for i in range(seq_len):
            h = self.fc_u(x[:, i, :]) + self.fc_w(h)  # activation is applied by the caller
            output[:, i, :] = h
        return output

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, bidirectional=False):
        super().__init__()
        self.bidirectional = bidirectional
        self.rnn_forward = RNNbase(input_size, hidden_size)
        if bidirectional:
            self.rnn_backward = RNNbase(input_size, hidden_size)

    def forward(self, x):
        out = self.rnn_forward(x)
        if self.bidirectional:
            # run the backward pass over the time-reversed input (dim 1 is time),
            # then flip the result back so both directions are time-aligned
            out = out + self.rnn_backward(x.flip(1)).flip(1)
        return torch.tanh(out)
In principle this is already enough to handle sequences, so why keep upgrading? Because I don't just need to know "我是菜鸡" ("I am a noob"); I also want to produce long, literary sentences like "我怎么可能不是菜鸡" ("How could I possibly not be a noob"). Due to the well-known vanishing-gradient problem, an RNN can only relate words over a very limited distance in a sequence. To let more distant words stay connected, instead of becoming completely unrelated after a small gap, we have the LSTM.
An LSTM has a line running through the whole sequence (the cell state), and this line is what stores the earlier information; all of the preceding context is fused into it. If the RNN's h_{t-1} is the estate the previous head of the family left you, which you must take over for better or worse, then C_{t-1} is the family's ancestral legacy.
Now let's go through the gates one by one.
First, the forget gate (forget gate layer). Its inputs are the previous result (h_{t-1}) and the current situation (x_t). After careful deliberation (W_f), you feel that some of the old rules no longer fit the times, so you decide to honor the ancestors only 80% of the way: that is the forget gate. A fully connected layer followed by a sigmoid produces a value in [0, 1]; this value is applied to the cell state, scaling down the weight of the earlier information.
Next, the input gate (input gate layer). Every successor wants to do something different during their term, and write it into the charter while they're at it, which is perfectly normal. After much back and forth (W_i), you decide to record the top 10% of your most progressive ideas for posterity. Like the forget gate, it is a fully connected layer plus a sigmoid, producing a value between 0 and 1.
Having made up your mind, you start writing: you gather the points you consider important, distill them (W_c), and summarize them into the "xxx Thought" (C̃_t).
You edit and trim the previous text (f_t * C̃_{t-1} becomes f_t * C_{t-1}), add your carefully chosen new content (i_t * C̃_t), and out comes "xxx, revised t-th edition" (C_t).
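A tiny numeric sketch of that update, C_t = f_t * C_{t-1} + i_t * C̃_t, using made-up gate values that match the 80%/10% story above:

```python
import torch

c_prev = torch.tensor([1.0, -2.0, 0.5])  # C_{t-1}: the inherited text
f_t    = torch.tensor([0.8, 0.8, 0.8])   # forget gate: keep 80% of the old
i_t    = torch.tensor([0.1, 0.1, 0.1])   # input gate: admit 10% of the new
g_t    = torch.tensor([0.5, -1.0, 1.0])  # candidate values C~_t

c_t = f_t * c_prev + i_t * g_t           # the revised edition C_t
print(c_t)  # tensor([ 0.8500, -1.7000,  0.5000])
```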
Finally, the output gate (output gate layer). Your term is up and it's time for the handover, so you write a summary of your work. Your brightest ideas are already written into the record, so of course you draw on it; but a lot of it came from your predecessors, and claiming it all as your own achievement would be improper. You look at what the previous incumbent wrote (h_{t-1}), consider the actual situation (x_t), and, being modest and cautious (W_o), claim only 99% (o_t) as your own. Of course, before handing it in you polish the wording, emphasizing some parts and smoothing over others (tanh(C_t)), and then it's ready to submit (h_t).
With the whole procedure clear, let's pin down each of the equations.
[figure: the LSTM gate equations; original image unavailable]
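Since the image is lost, here are the standard LSTM equations the figure showed, using the same notation as the walkthrough above:

```latex
\begin{aligned}
f_t &= \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) && \text{forget gate} \\
i_t &= \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) && \text{input gate} \\
\widetilde{C}_t &= \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) && \text{candidate cell state} \\
C_t &= f_t * C_{t-1} + i_t * \widetilde{C}_t && \text{cell-state update} \\
o_t &= \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) && \text{output gate} \\
h_t &= o_t * \tanh(C_t) && \text{hidden output}
\end{aligned}
```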
Code:
class ForgetGateLayer(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.fc_u = nn.Linear(input_size, hidden_size)
        self.fc_w = nn.Linear(hidden_size, hidden_size)

    def forward(self, x, h):
        # f_t = sigmoid(W_f . [h_{t-1}, x_t] + b_f)
        return torch.sigmoid(self.fc_u(x) + self.fc_w(h))

class InputGateLayer(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.fc_i_u = nn.Linear(input_size, hidden_size)
        self.fc_i_w = nn.Linear(hidden_size, hidden_size)
        self.fc_g_u = nn.Linear(input_size, hidden_size)
        self.fc_g_w = nn.Linear(hidden_size, hidden_size)

    def forward(self, x, h):
        i_t = torch.sigmoid(self.fc_i_u(x) + self.fc_i_w(h))  # input gate i_t
        g_t = torch.tanh(self.fc_g_u(x) + self.fc_g_w(h))     # candidate C~_t
        return i_t, g_t

class OutputGateLayer(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.fc_u = nn.Linear(input_size, hidden_size)
        self.fc_w = nn.Linear(hidden_size, hidden_size)

    def forward(self, x, h):
        # o_t = sigmoid(W_o . [h_{t-1}, x_t] + b_o)
        return torch.sigmoid(self.fc_u(x) + self.fc_w(h))

class LSTMBlock(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.forget_gate_layer = ForgetGateLayer(input_size, hidden_size)
        self.input_gate_layer = InputGateLayer(input_size, hidden_size)
        self.output_gate_layer = OutputGateLayer(input_size, hidden_size)

    def forward(self, x, h, c):
        f_t = self.forget_gate_layer(x, h)
        i_t, g_t = self.input_gate_layer(x, h)
        c_t = f_t * c + i_t * g_t          # C_t = f_t*C_{t-1} + i_t*C~_t
        o_t = self.output_gate_layer(x, h)
        h_t = o_t * torch.tanh(c_t)        # h_t = o_t * tanh(C_t)
        return c_t, h_t

class LSTMbase(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.lstm_block = LSTMBlock(input_size, hidden_size)

    def forward(self, x):
        batch_size, seq_len, _ = x.shape
        output = torch.zeros(batch_size, seq_len, self.hidden_size)
        h_t = torch.zeros(batch_size, self.hidden_size)
        c_t = torch.zeros(batch_size, self.hidden_size)
        for i in range(seq_len):
            c_t, h_t = self.lstm_block(x[:, i, :], h_t, c_t)
            output[:, i, :] = h_t
        return output

class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, bidirectional=False):
        super().__init__()
        self.bidirectional = bidirectional
        self.rnn_forward = LSTMbase(input_size, hidden_size)
        if bidirectional:
            self.rnn_backward = LSTMbase(input_size, hidden_size)

    def forward(self, x):
        if self.bidirectional:
            # backward pass runs over the time-reversed input (dim 1 is time)
            x = self.rnn_forward(x) + self.rnn_backward(x.flip(1)).flip(1)
        else:
            x = self.rnn_forward(x)
        return F.relu(x)

model = LSTM(32, 256, bidirectional=True)
test_data = torch.rand(5, 10, 32)
test_result = model(test_data)
test_result.shape
The feature-extraction network in CRNN is fairly simple, similar to VGG. The only two differences are that in the original paper two of the pooling layers are changed from 2x2 to 2x1, because most text-recognition images are long, narrow strips. The network structure is shown below; then we write the code.
[figure: the CRNN network structure; original image unavailable]
The only departure from the model structure: the two (1x2) pooling layers there use stride 2, while the code here, following some other people's implementations, uses stride 1.
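The CRNN code below calls a CNN class that never appears in the text. Here is a plausible sketch of that VGG-like extractor; the exact channel counts and pool placement are my assumptions, chosen so that the output shape matches the (512, height//16 - 1, width//4 - 1) the CRNN expects:

```python
import torch
import torch.nn as nn

class CNN(nn.Module):
    # A guessed sketch of the VGG-like feature extractor from the figure.
    # Channel counts and pooling placement are assumptions, chosen so that
    # an input of (channel, H, W) comes out as (512, H//16 - 1, W//4 - 1).
    def __init__(self, img_size=(1, 32, 100)):
        super().__init__()
        channel, _, _ = img_size
        self.net = nn.Sequential(
            nn.Conv2d(channel, 64, 3, 1, 1), nn.ReLU(),
            nn.MaxPool2d(2, 2),                        # H/2,  W/2
            nn.Conv2d(64, 128, 3, 1, 1), nn.ReLU(),
            nn.MaxPool2d(2, 2),                        # H/4,  W/4
            nn.Conv2d(128, 256, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, 1, 1), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),              # H/8,  W/4  (a 2x1 pool)
            nn.Conv2d(256, 512, 3, 1, 1), nn.BatchNorm2d(512), nn.ReLU(),
            nn.Conv2d(512, 512, 3, 1, 1), nn.BatchNorm2d(512), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),              # H/16, W/4  (a 2x1 pool)
            nn.Conv2d(512, 512, 2, 1, 0), nn.ReLU(),   # H/16 - 1, W/4 - 1
        )

    def forward(self, x):
        return self.net(x)

features = CNN()(torch.rand(2, 1, 32, 100))
print(features.shape)  # torch.Size([2, 512, 1, 24])
```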
class CRNN(nn.Module):
    def __init__(self, output_class, img_size=(1, 32, 100), map_to_seq_hidden=64, rnn_hidden=256):
        super().__init__()
        channel, height, width = img_size
        self.cnn_output_channel = 512
        self.cnn_output_height = height // 16 - 1
        self.cnn_output_width = width // 4 - 1
        self.cnn = CNN(img_size)  # the VGG-like feature extractor described above
        self.map_to_seq = nn.Linear(self.cnn_output_channel * self.cnn_output_height, map_to_seq_hidden)
        self.rnn1 = LSTM(map_to_seq_hidden, rnn_hidden, bidirectional=True)
        self.rnn2 = LSTM(rnn_hidden, rnn_hidden, bidirectional=True)
        self.fc = nn.Linear(rnn_hidden, output_class)

    def forward(self, x):
        batch_size, *_ = x.shape
        x = self.cnn(x)                    # (B, C, H, W)
        x = x.view(batch_size, self.cnn_output_channel * self.cnn_output_height, -1)
        x = x.permute(0, 2, 1)             # (B, W, C*H): width becomes the sequence dimension
        x = self.map_to_seq(x)
        x = self.rnn1(x)
        x = self.rnn2(x)
        x = self.fc(x)
        return x

model = CRNN(2)
test_data = torch.rand(5, 1, 32, 71)
model(test_data).shape