This morning I started working through some basic concepts and the working principles of RNNs, following Andrew Ng's course (the Course 5, Week 1 assignment) together with a few books. I'm writing this up in my spare time so that I can look up the relevant points later.
Traditional neural networks generally cannot handle sequence data such as speech, video, and text very well, mainly because the inputs and outputs can have different lengths in different examples, and because a plain network does not share the features it learns across different positions in the sequence.
Here are some real-life scenarios where RNN models are applied: speech recognition, music generation, sentiment classification, machine translation, video activity recognition, and named entity recognition.
As mentioned above, compared with a traditional neural network an RNN can handle sequence data well. The idea is that at each time step the RNN computes the activation for the current step and passes it recurrently into the next step. Put more generally, the network has a form of "memory": it reads one input at a time (for example one word), remembers some information through the hidden-layer activations, and carries those activations from one time step to the next.
In terms of overall structure, RNNs come in uni-directional and bidirectional (BRNN) flavors. A uni-directional RNN uses only information from the past to process what comes later, while a bidirectional RNN gathers context from both the past and the future (much like reading comprehension, where you need the surrounding context to answer well). The trade-off is that the entire input has to be read before the bidirectional pass can run; there are ways around this, but I won't go into them here. A minimal sketch of the idea is given right below.
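As a conceptual sketch (my own illustration, not part of the course assignment), a bidirectional pass can be built from two ordinary RNN passes, one over the sequence and one over its time-reversal, whose hidden states are then concatenated. The function name birnn_forward and the two separate parameter dictionaries are hypothetical; rnn_forward is the function implemented further down in this post.

# Hypothetical illustration of a bidirectional pass, built on the rnn_forward
# function defined later in this post (not part of the original assignment).
def birnn_forward(x, a0_fwd, a0_bwd, params_fwd, params_bwd):
    # left-to-right pass over the input of shape (n_x, m, T_x)
    a_fwd, _, _ = rnn_forward(x, a0_fwd, params_fwd)
    # right-to-left pass: a second RNN (with its own parameters) over the time-reversed input
    a_bwd_rev, _, _ = rnn_forward(x[:, :, ::-1], a0_bwd, params_bwd)
    # flip the backward states back into the original time order
    a_bwd = a_bwd_rev[:, :, ::-1]
    # concatenate both directions at every time step -> shape (2 * n_a, m, T_x)
    return np.concatenate([a_fwd, a_bwd], axis=0)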
Three cell types are in common use: the basic (vanilla) RNN cell, the Gated Recurrent Unit (GRU), and the Long Short-Term Memory (LSTM) cell that we see most often in practice.
Import the functions needed by the code that follows:
import numpy as np
from rnn_utils import *

def softmax(x):
    # numerically stable softmax, normalizing over axis 0 (one column per example)
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)

def sigmoid(x):
    # logistic sigmoid, applied element-wise
    return 1 / (1 + np.exp(-x))
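As a quick sanity check (an illustrative snippet of mine, not part of the assignment), softmax as written above normalizes over axis 0, so every column of the result sums to 1, which matches its later use on outputs of shape (n_y, m) where each column is one example:

scores = np.array([[1.0, 2.0],
                   [3.0, 0.5]])
print(softmax(scores).sum(axis=0))   # -> [1. 1.], one probability distribution per column
print(sigmoid(np.array([0.0])))      # -> [0.5]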
We take a single RNN cell as the worked example; the other variants follow the same pattern, and the full forward pass only needs a loop over the time steps. The basic cell takes the current input $x^{\langle t \rangle}$ and the previous hidden state $a^{\langle t-1 \rangle}$ and computes

$$a^{\langle t \rangle} = \tanh\!\left(W_{ax}\, x^{\langle t \rangle} + W_{aa}\, a^{\langle t-1 \rangle} + b_a\right), \qquad \hat{y}^{\langle t \rangle} = \mathrm{softmax}\!\left(W_{ya}\, a^{\langle t \rangle} + b_y\right)$$

where:

(1) Parameters: $W_{ax}$ of shape $(n_a, n_x)$ multiplies the input, $W_{aa}$ of shape $(n_a, n_a)$ multiplies the previous hidden state, $W_{ya}$ of shape $(n_y, n_a)$ maps the hidden state to the output, and $b_a$, $b_y$ are the corresponding biases.

(2) Computation steps: first compute the new hidden state $a^{\langle t \rangle}$ with the tanh activation, then compute the prediction $\hat{y}^{\langle t \rangle}$ with a softmax, and finally cache $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, \text{parameters})$ for the backward pass.
# GRADED FUNCTION: rnn_cell_forward

def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
    """

    # Retrieve parameters from "parameters"
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    # Compute the hidden state with tanh activation, then use the new hidden state a_next to compute the prediction yt_pred
    a_next = np.tanh(np.dot(Wax, xt) + np.dot(Waa, a_prev) + ba)
    yt_pred = softmax(np.dot(Wya, a_next) + by)

    # store values you need for backward propagation in cache
    cache = (a_next, a_prev, xt, parameters)

    return a_next, yt_pred, cache
# Testing your function
np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
Waa = np.random.randn(5, 5)
Wax = np.random.randn(5, 3)
Wya = np.random.randn(2, 5)
ba = np.random.randn(5, 1)
by = np.random.randn(2, 1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}

a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
# GRADED FUNCTION: rnn_forward

def rnn_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network.

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        ba -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of caches, x)
    """

    # Initialize "caches" which will contain the list of all caches
    caches = []

    # Retrieve dimensions from shapes of x and parameters["Wya"]
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wya"].shape

    # initialize "a" and "y_pred" with zeros
    a = np.zeros((n_a, m, T_x))
    y_pred = np.zeros((n_y, m, T_x))

    # Initialize a_next
    a_next = a0

    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, compute the prediction, get the cache
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
        # Save the value of the new "next" hidden state in a
        a[:, :, t] = a_next
        # Save the value of the prediction in y_pred
        y_pred[:, :, t] = yt_pred
        # Append "cache" to caches
        caches.append(cache)

    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3, 10, 4)
a0 = np.random.randn(5, 10)
Waa = np.random.randn(5, 5)
Wax = np.random.randn(5, 3)
Wya = np.random.randn(2, 5)
ba = np.random.randn(5, 1)
by = np.random.randn(2, 1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}

a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
The LSTM has three gates: a forget gate, an update gate, and an output gate.
Imagine that we are reading the words of a passage and want the LSTM to keep track of the grammatical structure, for instance whether the subject is singular or plural, as in the two sentences below (*** marks omitted text; the point is that the verb later in the sentence has to agree with the subject mentioned earlier):
In this situation the network has to remember and forget selectively: "car" versus "cars" is exactly the kind of information we need to keep, while other details can be dropped, and this is what the network learns. The forget gate $\Gamma_f^{\langle t \rangle}$ controls it: when its value is 0 or close to 0, the corresponding piece of information is treated as useless and that part of the memory is erased; when it is close to 1, that part is kept. The gate is computed with a sigmoid function, as shown below.
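Written out, the forget gate is a sigmoid over the concatenation of the previous hidden state and the current input (this matches the Wf and bf used in lstm_cell_forward below):

$$\Gamma_f^{\langle t \rangle} = \sigma\!\left(W_f\left[a^{\langle t-1 \rangle}, x^{\langle t \rangle}\right] + b_f\right)$$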
The update gate can be seen as a complement to the forget gate: once part of the memory has been erased, it decides how much of the new candidate value should be written in, so that the cell state can be refreshed with the latest information. The remaining pieces work as described above; the formulas are given below.
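Concretely, the update gate, the candidate value, and the resulting cell-state update are (again matching Wi, bi, Wc, bc in the code below):

$$\Gamma_u^{\langle t \rangle} = \sigma\!\left(W_i\left[a^{\langle t-1 \rangle}, x^{\langle t \rangle}\right] + b_i\right), \qquad \tilde{c}^{\langle t \rangle} = \tanh\!\left(W_c\left[a^{\langle t-1 \rangle}, x^{\langle t \rangle}\right] + b_c\right)$$

$$c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle} \odot c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} \odot \tilde{c}^{\langle t \rangle}$$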
The output gate is the easiest to understand: after all of the operations above, it decides what the cell finally outputs, as in the formulas below.
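The output gate, the new hidden state, and the prediction are (matching Wo, bo, Wy, by in the code below):

$$\Gamma_o^{\langle t \rangle} = \sigma\!\left(W_o\left[a^{\langle t-1 \rangle}, x^{\langle t \rangle}\right] + b_o\right), \qquad a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle} \odot \tanh\!\left(c^{\langle t \rangle}\right), \qquad \hat{y}^{\langle t \rangle} = \mathrm{softmax}\!\left(W_y\, a^{\langle t \rangle} + b_y\right)$$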
# GRADED FUNCTION: lstm_cell_forward

def lstm_cell_forward(xt, a_prev, c_prev, parameters):
    """
    Implement a single forward step of the LSTM-cell

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
                        Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        bi -- Bias of the update gate, numpy array of shape (n_a, 1)
                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
                        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
                        Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
                        bo -- Bias of the output gate, numpy array of shape (n_a, 1)
                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    c_next -- next memory state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)

    Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
    c stands for the memory value
    """

    # Retrieve parameters from "parameters"
    Wf = parameters["Wf"]
    bf = parameters["bf"]
    Wi = parameters["Wi"]
    bi = parameters["bi"]
    Wc = parameters["Wc"]
    bc = parameters["bc"]
    Wo = parameters["Wo"]
    bo = parameters["bo"]
    Wy = parameters["Wy"]
    by = parameters["by"]

    # Retrieve dimensions from shapes of xt and Wy
    n_x, m = xt.shape
    n_y, n_a = Wy.shape

    # Concatenate a_prev and xt
    concat = np.zeros((n_a + n_x, m))
    concat[:n_a, :] = a_prev
    concat[n_a:, :] = xt

    # Compute values for ft, it, cct, c_next, ot, a_next
    ft = sigmoid(np.dot(Wf, concat) + bf)
    it = sigmoid(np.dot(Wi, concat) + bi)
    cct = np.tanh(np.dot(Wc, concat) + bc)
    c_next = ft * c_prev + it * cct
    ot = sigmoid(np.dot(Wo, concat) + bo)
    a_next = ot * np.tanh(c_next)

    # Compute prediction of the LSTM cell
    yt_pred = softmax(np.dot(Wy, a_next) + by)

    # store values needed for backward propagation in cache
    cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)

    return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
c_prev = np.random.randn(5, 10)
Wf = np.random.randn(5, 5 + 3)
bf = np.random.randn(5, 1)
Wi = np.random.randn(5, 5 + 3)
bi = np.random.randn(5, 1)
Wo = np.random.randn(5, 5 + 3)
bo = np.random.randn(5, 1)
Wc = np.random.randn(5, 5 + 3)
bc = np.random.randn(5, 1)
Wy = np.random.randn(2, 5)
by = np.random.randn(2, 1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy,
              "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
# GRADED FUNCTION: lstm_forward

def lstm_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network using an LSTM-cell.

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
                        Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        bi -- Bias of the update gate, numpy array of shape (n_a, 1)
                        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
                        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
                        Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
                        bo -- Bias of the output gate, numpy array of shape (n_a, 1)
                        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    c -- Memory cell states for every time-step, numpy array of shape (n_a, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
    """

    # Initialize "caches", which will track the list of all the caches
    caches = []

    # Retrieve dimensions from shapes of x and parameters['Wy']
    n_x, m, T_x = x.shape
    n_y, n_a = parameters['Wy'].shape

    # initialize "a", "c" and "y" with zeros
    a = np.zeros((n_a, m, T_x))
    c = np.zeros((n_a, m, T_x))
    y = np.zeros((n_y, m, T_x))

    # Initialize a_next and c_next
    a_next = a0
    c_next = np.zeros((n_a, m))

    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, next memory state, compute the prediction, get the cache
        a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)
        # Save the value of the new "next" hidden state in a
        a[:, :, t] = a_next
        # Save the value of the prediction in y
        y[:, :, t] = yt
        # Save the value of the next cell state in c
        c[:, :, t] = c_next
        # Append the cache into caches
        caches.append(cache)

    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3, 10, 7)
a0 = np.random.randn(5, 10)
Wf = np.random.randn(5, 5 + 3)
bf = np.random.randn(5, 1)
Wi = np.random.randn(5, 5 + 3)
bi = np.random.randn(5, 1)
Wo = np.random.randn(5, 5 + 3)
bo = np.random.randn(5, 1)
Wc = np.random.randn(5, 5 + 3)
bc = np.random.randn(5, 1)
Wy = np.random.randn(2, 5)
by = np.random.randn(2, 1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy,
              "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1][1] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
The most tedious part of backpropagation is deriving the gradients, so in practice we usually only write the forward pass and let a library handle the backward pass. Still, understanding the whole flow is very helpful, so here I'll walk through it for the basic RNN.
The flow is the usual one: use the chain rule to compute the partial derivative of the loss with respect to each parameter and update the weights step by step from the back towards the front. Note that we already stored many intermediate results in the caches during the forward pass, which lets us avoid recomputing them during backpropagation and so cuts the running time.
Here the derivative of $\tanh(u)$ with respect to $u$ is $1 - \tanh^2(u)$, so $d\tanh(u) = \left(1 - \tanh^2(u)\right) du$ by the chain rule for composite functions; the remaining gradients follow from the usual differentiation rules.
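Spelling this out for one time step (these expressions correspond one-to-one with the rnn_cell_backward code below), with $d\tanh = \left(1 - (a^{\langle t \rangle})^2\right) \odot da^{\langle t \rangle}$:

$$dW_{ax} = d\tanh \cdot (x^{\langle t \rangle})^T, \qquad dW_{aa} = d\tanh \cdot (a^{\langle t-1 \rangle})^T, \qquad db_a = \sum_{\text{examples}} d\tanh$$

$$dx^{\langle t \rangle} = W_{ax}^T \cdot d\tanh, \qquad da^{\langle t-1 \rangle} = W_{aa}^T \cdot d\tanh$$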
def rnn_cell_backward(da_next, cache):
    """
    Implements the backward pass for the RNN-cell (single time-step).

    Arguments:
    da_next -- Gradient of loss with respect to next hidden state
    cache -- python dictionary containing useful values (output of rnn_cell_forward())

    Returns:
    gradients -- python dictionary containing:
                        dxt -- Gradients of input data, of shape (n_x, m)
                        da_prev -- Gradients of previous hidden state, of shape (n_a, m)
                        dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
                        dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
                        dba -- Gradients of bias vector, of shape (n_a, 1)
    """

    # Retrieve values from cache
    (a_next, a_prev, xt, parameters) = cache

    # Retrieve values from parameters
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    # dtanh(u) = (1 - tanh(u)**2) * du
    dtanh = (1 - a_next ** 2) * da_next

    # compute the gradient of the loss with respect to xt and Wax
    dxt = np.dot(Wax.T, dtanh)
    dWax = np.dot(dtanh, xt.T)

    # compute the gradient with respect to a_prev and Waa
    da_prev = np.dot(Waa.T, dtanh)
    dWaa = np.dot(dtanh, a_prev.T)

    # compute the gradient with respect to ba (sum over the examples in the batch)
    dba = np.sum(dtanh, keepdims=True, axis=-1)

    # Store the gradients in a python dictionary
    gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}

    return gradients
# Testing your function
np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
Wax = np.random.randn(5, 3)
Waa = np.random.randn(5, 5)
Wya = np.random.randn(2, 5)
ba = np.random.randn(5, 1)
by = np.random.randn(2, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}

a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)

da_next = np.random.randn(5, 10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
def rnn_backward(da, caches):
    """
    Implement the backward pass for a RNN over an entire sequence of input data.

    Arguments:
    da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
    caches -- tuple containing information from the forward pass (rnn_forward)

    Returns:
    gradients -- python dictionary containing:
                        dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
                        da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
                        dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
                        dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-array of shape (n_a, n_a)
                        dba -- Gradient w.r.t the bias, of shape (n_a, 1)
    """

    # Retrieve values from the first cache (t=1) of caches
    (caches, x) = caches
    (a1, a0, x1, parameters) = caches[0]

    # Retrieve dimensions from da's and x1's shapes
    n_a, m, T_x = da.shape
    n_x, m = x1.shape

    # initialize the gradients with the right sizes
    dx = np.zeros((n_x, m, T_x))
    dWax = np.zeros((n_a, n_x))
    dWaa = np.zeros((n_a, n_a))
    dba = np.zeros((n_a, 1))
    da0 = np.zeros((n_a, m))
    da_prevt = np.zeros((n_a, m))

    # Loop through all the time steps, from the last one back to the first
    for t in reversed(range(T_x)):
        # Compute gradients at time step t; the incoming gradient is the sum of the upstream
        # gradient da[:, :, t] and the gradient da_prevt flowing back from step t+1
        gradients = rnn_cell_backward(da[:, :, t] + da_prevt, caches[t])
        # Retrieve derivatives from gradients
        dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
        # Increment global derivatives w.r.t parameters by adding their derivative at time-step t
        dx[:, :, t] = dxt
        dWax += dWaxt
        dWaa += dWaat
        dba += dbat

    # Set da0 to the gradient of a which has been backpropagated through all time-steps
    da0 = da_prevt

    # Store the gradients in a python dictionary
    gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa, "dba": dba}

    return gradients
# Testing
np.random.seed(1)
x = np.random.randn(3, 10, 4)
a0 = np.random.randn(5, 10)
Wax = np.random.randn(5, 3)
Waa = np.random.randn(5, 5)
Wya = np.random.randn(2, 5)
ba = np.random.randn(5, 1)
by = np.random.randn(2, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}

a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
That is about it. Below are the derivative formulas for the LSTM cell; if you are interested, you can implement them yourself.
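The original figure with those formulas is not reproduced here, so as a reference I am writing down my own derivation, obtained by applying the chain rule to the forward equations above (please double-check before relying on it). Writing $dc_{\text{tot}} = dc^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} \odot \left(1 - \tanh^2(c^{\langle t \rangle})\right) \odot da^{\langle t \rangle}$ for the total gradient reaching the cell state, the pre-activation gate gradients for a single step are:

$$d\Gamma_o^{\langle t \rangle} = da^{\langle t \rangle} \odot \tanh(c^{\langle t \rangle}) \odot \Gamma_o^{\langle t \rangle} \odot \left(1 - \Gamma_o^{\langle t \rangle}\right)$$

$$d\tilde{c}^{\langle t \rangle} = dc_{\text{tot}} \odot \Gamma_u^{\langle t \rangle} \odot \left(1 - (\tilde{c}^{\langle t \rangle})^2\right), \qquad d\Gamma_u^{\langle t \rangle} = dc_{\text{tot}} \odot \tilde{c}^{\langle t \rangle} \odot \Gamma_u^{\langle t \rangle} \odot \left(1 - \Gamma_u^{\langle t \rangle}\right)$$

$$d\Gamma_f^{\langle t \rangle} = dc_{\text{tot}} \odot c^{\langle t-1 \rangle} \odot \Gamma_f^{\langle t \rangle} \odot \left(1 - \Gamma_f^{\langle t \rangle}\right), \qquad dc^{\langle t-1 \rangle} = dc_{\text{tot}} \odot \Gamma_f^{\langle t \rangle}$$

Each weight gradient is then the corresponding gate gradient times the transposed concatenated input, e.g. $dW_f = d\Gamma_f^{\langle t \rangle} \cdot \left[a^{\langle t-1 \rangle}, x^{\langle t \rangle}\right]^T$ (and analogously for $W_i$, $W_c$, $W_o$), each bias gradient sums over the examples, and $da^{\langle t-1 \rangle}$ is obtained by multiplying each gate gradient by the hidden-state columns of its weight matrix, $W_*[:, :n_a]^T$, and summing the four contributions.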
Done.