
[LSTM] MATLAB Simulation of a Face Recognition Algorithm Based on the LSTM Network

1. Software Version

MATLAB R2021a

2. Theoretical Background of the Algorithm

    The long short-term memory (LSTM) model was first proposed by Hochreiter et al. in 1997. Its core idea is a special neuron structure that can store information over long periods of time. The basic structure of the LSTM network model is shown in the figure below:

Figure 1. Basic structure of the LSTM network

    As Figure 1 shows, the LSTM network consists of three parts: an input layer, a memory block, and an output layer. The memory block is composed of an input gate, a forget gate, and an output gate. Through these three control gates, the LSTM model governs the read and write operations of all neurons in the network.

    The basic idea of the LSTM model is to use multiple control gates to suppress the vanishing-gradient problem of recurrent neural networks. Because an LSTM can preserve gradient information over long time spans and thus extend the effective processing window of a signal, it is suitable for signals of various frequencies as well as mixed high- and low-frequency signals. Inside the memory cell, the input gate, forget gate, and output gate, together with the control units, form a nonlinear summation unit. The activation function of all three gates is the sigmoid function, whose output switches each gate between its "open" and "closed" states.

    The figure below shows the internal structure of the memory block in the LSTM model:

Figure 2. Internal structure of the LSTM memory cell

    As Figure 2 shows, the memory cell works as follows: when the input gate is "open", external information is read into the memory cell; when the input gate is "closed", external information cannot enter it. The forget gate and the output gate provide analogous control. Through these three gates, the LSTM model can keep gradient information in the memory cell for a long time. While the cell is holding information over a long period, its forget gate is "open" and its input gate is "closed".

    Once the input gate switches to "open", the memory cell starts receiving and storing external information. When it switches back to "closed", the cell stops accepting external input; at the same time, the output gate opens and the information stored in the cell is passed to the next layer. The forget gate, in turn, resets the neuron state when necessary.
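    As a toy numerical illustration of this gating behaviour (not taken from the article's code), the cell update rule given in the forward-pass equations below can be evaluated for the two gate configurations just described; all values here are made up for illustration:

    % Toy MATLAB illustration of gate states acting on a stored cell value.
    c = 5;                          % value currently stored in the memory cell
    g = tanh(3);                    % candidate input offered by the input branch
    % long-term storage: forget gate "open" (~1), input gate "closed" (~0)
    f = 0.99;  i = 0.01;
    c_keep = c*f + g*i              % ~4.96: the stored value is retained
    % overwriting: forget gate "closed" (~0), input gate "open" (~1)
    f = 0.01;  i = 0.99;
    c_new  = c*f + g*i              % ~1.04: the old value is replaced by g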

    For the forward propagation of the LSTM network model, the governing equations are as follows, where $\sigma(\cdot)$ is the sigmoid function, $\odot$ denotes the element-wise product, $x_t$ is the input at time $t$, and $h_{t-1}$ is the previous hidden output:

 1. The input gate is computed as:

$$i_t = \sigma(x_t U_i + h_{t-1} W_i)$$

 2. The forget gate is computed as:

$$f_t = \sigma(x_t U_f + h_{t-1} W_f)$$

 3. The memory cell is computed as, with candidate input $g_t = \tanh(x_t U_g + h_{t-1} W_g)$:

$$c_t = c_{t-1} \odot f_t + g_t \odot i_t$$

 4. The output gate is computed as:

$$o_t = \sigma(x_t U_o + h_{t-1} W_o)$$

 5. The memory cell output is computed as:

$$h_t = \tanh(c_t) \odot o_t$$

For the backward propagation of the LSTM network model, each gate's error term follows from the chain rule applied to the equations above; denoting by $\delta_{c_t}$ the error reaching the cell state:

 6. The input gate difference is computed as:

$$\delta_{i_t} = \delta_{c_t} \odot g_t \odot i_t \odot (1 - i_t)$$
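    As a check on these equations, the following minimal MATLAB sketch evaluates one forward time step with random weights; the dimensions and the zero initial state are illustrative assumptions, chosen to match the core code in Section 3:

    % One forward LSTM time step, following equations (1)-(6) above.
    input_dim = 2;  hidden_dim = 4;                 % illustrative sizes
    sigmoid = @(x) 1 ./ (1 + exp(-x));
    x_t    = [1 0];                                 % current input, 1 x input_dim
    h_prev = zeros(1, hidden_dim);                  % previous hidden output
    c_prev = zeros(1, hidden_dim);                  % previous cell state
    U_i = 2*rand(input_dim, hidden_dim) - 1;  W_i = 2*rand(hidden_dim) - 1;
    U_f = 2*rand(input_dim, hidden_dim) - 1;  W_f = 2*rand(hidden_dim) - 1;
    U_o = 2*rand(input_dim, hidden_dim) - 1;  W_o = 2*rand(hidden_dim) - 1;
    U_g = 2*rand(input_dim, hidden_dim) - 1;  W_g = 2*rand(hidden_dim) - 1;
    i_t = sigmoid(x_t*U_i + h_prev*W_i);            % input gate
    f_t = sigmoid(x_t*U_f + h_prev*W_f);            % forget gate
    o_t = sigmoid(x_t*U_o + h_prev*W_o);            % output gate
    g_t = tanh(x_t*U_g + h_prev*W_g);               % candidate input
    c_t = c_prev .* f_t + g_t .* i_t;               % cell update
    h_t = tanh(c_t) .* o_t;                         % memory cell output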

    The overall flow of the LSTM-based visual recognition algorithm is shown below:

Figure 3. Flowchart of the LSTM-based visual recognition algorithm

According to the flowchart in Figure 3, the LSTM-based visual recognition algorithm studied in this article proceeds in the following steps:

    Step 1: Image acquisition. This article takes face images as the research object.

    Step 2: Image preprocessing. The visual images to be recognized are preprocessed as described in Section 2 of this chapter to obtain a cleaner image.

    Step 3: Image segmentation. The image is partitioned into sub-images; the block size is chosen according to the relation between the recognition target and the overall scene in the acquired image.

    Step 4: Extraction of geometric elements from the sub-images. An edge-extraction method obtains the geometric elements contained in each sub-image, and these elements are assembled into "sentence" information.

    Step 5: The sentence information is fed into the LSTM network. This step is the core of the algorithm, and the LSTM recognition process is introduced below; a MATLAB sketch of steps 1-4 follows the training-flow description. First, the sentence information enters the LSTM network through its input layer; the basic structure is shown in the figure below:

Figure 4. Recognition structure of the LSTM network

    Suppose that at time step $t$ the input feature and the output of the LSTM are $x_t$ and $y_t$, that the input and output of its memory block are $c_{t-1}$ and $c_t$, and that $g_t$ and $h_t$ denote the output of the neuron's activation function and the hidden-layer output, respectively. The training flow of the whole LSTM then follows the forward- and backward-propagation equations given above.
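    The following is a minimal sketch of steps 1-4 of this flow (acquisition, preprocessing, block segmentation, edge extraction) using MATLAB's Image Processing Toolbox. The file name face.jpg, the 128x128 normalized size, the 3x3 median filter, and the 32x32 block size are illustrative assumptions, not values fixed by the article:

    % Steps 1-4: acquire, preprocess, segment, and extract edges.
    img = imread('face.jpg');                  % step 1: acquire a face image
    if size(img, 3) == 3, img = rgb2gray(img); end
    img = medfilt2(img, [3 3]);                % step 2: denoise / preprocess
    img = imresize(img, [128 128]);            % normalize the overall size
    blk = 32;                                  % step 3: split into 32x32 sub-images
    subs = mat2cell(img, blk*ones(1, 4), blk*ones(1, 4));
    % step 4: each binary edge map holds the geometric elements that are
    % serialized into the "sentence" fed to the LSTM in step 5
    feats = cellfun(@(s) edge(s, 'canny'), subs, 'UniformOutput', false);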

3. Core Code


function nn = func_LSTM(train_x, train_y, test_x, test_y)
% Train a toy LSTM on binary addition, then return a GRNN classifier
% whose spread is derived from the learned LSTM output weights.
% train_x, train_y: training samples and labels; test_x, test_y are unused.

binary_dim     = 8;
largest_number = 2^binary_dim - 1;
binary         = cell(largest_number, 1);
for i = 1:largest_number + 1
    binary{i}     = dec2bin(i-1, binary_dim);
    int2binary{i} = binary{i};
end

% hyper-parameters
alpha      = 0.000001;   % learning rate
input_dim  = 2;
hidden_dim = 32;
output_dim = 1;

% initialize neural network weights in [-1, 1]
% in_gate = sigmoid(X(t) * U_i + H(t-1) * W_i)
U_i = 2 * rand(input_dim, hidden_dim) - 1;
W_i = 2 * rand(hidden_dim, hidden_dim) - 1;
U_i_update = zeros(size(U_i));
W_i_update = zeros(size(W_i));
% forget_gate = sigmoid(X(t) * U_f + H(t-1) * W_f)
U_f = 2 * rand(input_dim, hidden_dim) - 1;
W_f = 2 * rand(hidden_dim, hidden_dim) - 1;
U_f_update = zeros(size(U_f));
W_f_update = zeros(size(W_f));
% out_gate = sigmoid(X(t) * U_o + H(t-1) * W_o)
U_o = 2 * rand(input_dim, hidden_dim) - 1;
W_o = 2 * rand(hidden_dim, hidden_dim) - 1;
U_o_update = zeros(size(U_o));
W_o_update = zeros(size(W_o));
% g_gate = tanh(X(t) * U_g + H(t-1) * W_g)
U_g = 2 * rand(input_dim, hidden_dim) - 1;
W_g = 2 * rand(hidden_dim, hidden_dim) - 1;
U_g_update = zeros(size(U_g));
W_g_update = zeros(size(W_g));
% output weights, initialized like the gate weights
out_para = 2 * rand(hidden_dim, output_dim) - 1;
out_para_update = zeros(size(out_para));
% C(t) = C(t-1) .* forget_gate + g_gate .* in_gate
% S(t) = tanh(C(t)) .* out_gate
% Out  = sigmoid(S(t) * out_para)

% train
iter = 9999; % training iterations
for j = 1:iter
    % generate a simple addition problem (a + b = c)
    a_int = randi(round(largest_number/2)); % integer version
    a     = int2binary{a_int+1};            % binary encoding
    b_int = randi(floor(largest_number/2));
    b     = int2binary{b_int+1};
    % true answer
    c_int = a_int + b_int;
    c     = int2binary{c_int+1};
    % where we'll store our best guess (binary encoded)
    d = zeros(size(c));
    overallError  = 0;  % total error over the sequence
    output_deltas = []; % output-layer differences, i.e. (target - out)
    % state histories, one row per time step; row 1 is the zero initial state
    H = zeros(1, hidden_dim); % hidden-layer outputs
    C = zeros(1, hidden_dim); % cell states
    I = []; F = []; O = []; G = []; % gate activations

    % forward pass over the bit sequence, least-significant bit first
    for position = 0:binary_dim-1
        % X ------> input, size: 1 x input_dim
        X = [a(binary_dim - position)-'0', b(binary_dim - position)-'0'];
        % y ------> label, size: 1 x output_dim
        y = c(binary_dim - position)-'0';
        % gate and state updates; no bias terms are used here
        in_gate     = sigmoid(X * U_i + H(end, :) * W_i); % input gate
        forget_gate = sigmoid(X * U_f + H(end, :) * W_f); % forget gate
        out_gate    = sigmoid(X * U_o + H(end, :) * W_o); % output gate
        g_gate      = tanh(X * U_g + H(end, :) * W_g);    % candidate input
        C_t = C(end, :) .* forget_gate + g_gate .* in_gate; % cell update
        H_t = tanh(C_t) .* out_gate;                        % hidden output
        % store gates and states for back-propagation
        I = [I; in_gate];
        F = [F; forget_gate];
        O = [O; out_gate];
        G = [G; g_gate];
        C = [C; C_t];
        H = [H; H_t];
        % predicted output and its error
        pred_out      = sigmoid(H_t * out_para);
        output_error  = y - pred_out;
        output_deltas = [output_deltas; output_error];
        overallError  = overallError + abs(output_error(1));
        % decode the estimate so it can be inspected
        d(binary_dim - position) = round(pred_out);
    end

    % back-propagation through time, starting from the last time step
    for position = 0:binary_dim-1
        X = [a(position+1)-'0', b(position+1)-'0'];
        H_t   = H(end-position, :);   % H(t)
        H_t_1 = H(end-position-1, :); % H(t-1)
        C_t   = C(end-position, :);   % C(t)
        C_t_1 = C(end-position-1, :); % C(t-1)
        O_t = O(end-position, :);
        F_t = F(end-position, :);
        G_t = G(end-position, :);
        I_t = I(end-position, :);
        % output-layer difference
        output_diff   = output_deltas(end-position, :);
        H_t_diff      = output_diff * (out_para') .* sigmoid_output_to_derivative(H_t);
        out_para_diff = (H_t') * output_diff;
        % gate differences
        O_t_diff = H_t_diff .* tanh(C_t) .* sigmoid_output_to_derivative(O_t);
        C_t_diff = H_t_diff .* O_t .* tan_h_output_to_derivative(C_t);
        F_t_diff = C_t_diff .* C_t_1 .* sigmoid_output_to_derivative(F_t);
        I_t_diff = C_t_diff .* G_t .* sigmoid_output_to_derivative(I_t);
        G_t_diff = C_t_diff .* I_t .* tan_h_output_to_derivative(G_t);
        % weight differences (including this demo's extra element-wise
        % derivative factors on the weight matrices)
        U_i_diff = X' * I_t_diff .* sigmoid_output_to_derivative(U_i);
        W_i_diff = (H_t_1)' * I_t_diff .* sigmoid_output_to_derivative(W_i);
        U_o_diff = X' * O_t_diff .* sigmoid_output_to_derivative(U_o);
        W_o_diff = (H_t_1)' * O_t_diff .* sigmoid_output_to_derivative(W_o);
        U_f_diff = X' * F_t_diff .* sigmoid_output_to_derivative(U_f);
        W_f_diff = (H_t_1)' * F_t_diff .* sigmoid_output_to_derivative(W_f);
        U_g_diff = X' * G_t_diff .* tan_h_output_to_derivative(U_g);
        W_g_diff = (H_t_1)' * G_t_diff .* tan_h_output_to_derivative(W_g);
        % accumulate the updates over the sequence
        U_i_update = U_i_update + U_i_diff;
        W_i_update = W_i_update + W_i_diff;
        U_o_update = U_o_update + U_o_diff;
        W_o_update = W_o_update + W_o_diff;
        U_f_update = U_f_update + U_f_diff;
        W_f_update = W_f_update + W_f_diff;
        U_g_update = U_g_update + U_g_diff;
        W_g_update = W_g_update + W_g_diff;
        out_para_update = out_para_update + out_para_diff;
    end

    % apply the accumulated updates, then reset the accumulators
    U_i = U_i + U_i_update * alpha;
    W_i = W_i + W_i_update * alpha;
    U_o = U_o + U_o_update * alpha;
    W_o = W_o + W_o_update * alpha;
    U_f = U_f + U_f_update * alpha;
    W_f = W_f + W_f_update * alpha;
    U_g = U_g + U_g_update * alpha;
    W_g = W_g + W_g_update * alpha;
    out_para = out_para + out_para_update * alpha;
    U_i_update = U_i_update * 0;
    W_i_update = W_i_update * 0;
    U_o_update = U_o_update * 0;
    W_o_update = W_o_update * 0;
    U_f_update = U_f_update * 0;
    W_f_update = W_f_update * 0;
    U_g_update = U_g_update * 0;
    W_g_update = W_g_update * 0;
    out_para_update = out_para_update * 0;
end

% the learned output weights set the spread of a GRNN classifier
nn = newgrnn(train_x', train_y(:,1)', mean(mean(abs(out_para)))/2);
end

% --- helper functions (standard definitions) ---
function y = sigmoid(x)
y = 1 ./ (1 + exp(-x));
end

function d = sigmoid_output_to_derivative(y)
% sigmoid derivative expressed in terms of the sigmoid output y
d = y .* (1 - y);
end

function d = tan_h_output_to_derivative(y)
% tanh derivative expressed in terms of the tanh output y
d = 1 - y.^2;
end
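    A hedged usage sketch: as the last line of the listing shows, func_LSTM ultimately returns a generalized regression network built with newgrnn, so new samples are classified with sim. The shapes assumed below (one sample per row, labels in the first column of train_y) are inferred from the transposes in the listing, not stated in the article:

    % Hypothetical call, assuming train_x/test_x are N x D feature matrices.
    nn     = func_LSTM(train_x, train_y, test_x, test_y);
    y_pred = sim(nn, test_x');                     % one column per test sample
    acc    = mean(round(y_pred') == test_y(:,1));  % recognition rate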

4. Operating Steps and Simulation Results

    Using the LSTM recognition algorithm presented in this article, face images acquired under different levels of interference were recognized. The recognition-accuracy curves are shown in the figure below:

Figure 5. Recognition-accuracy curves of the four compared algorithms under different interference levels

    The simulation results in Figure 5 show that, as the interference on the acquired images decreases, the LSTM recognition algorithm studied here achieves the best accuracy; the RNN and the RBM-based deep network perform comparably, while the ordinary neural network is clearly worse. The specific recognition rates are listed in the table below:

Table 1. Recognition rates (%) of the four compared algorithms at different SNR levels

Algorithm    -15 dB    -10 dB    -5 dB     0 dB      5 dB      10 dB     15 dB
NN           17.5250   30.9500   45.0000   52.6000   55.4750   57.5750   57.6000
RBM          19.4000   40.4500   58.4750   67.9500   70.4000   72.2750   71.8750
RNN          20.6750   41.1500   60.0750   68.6000   72.5500   73.3500   73.3500
LSTM         23.1000   46.3500   65.0250   72.9500   75.6000   76.1000   76.3250



5. How to Obtain the Complete Source Code

Option 1: Contact the author via WeChat or QQ.

Option 2: Subscribe to the MATLAB/FPGA tutorial column to receive the tutorial cases free of charge, plus any two complete source packages.
