A so-called neural network is, in essence, also a regression model built from training samples. Unlike a classical regression model, its functional form is not concise but a nesting of many functions, so it cannot give an intuitive description of the relationships in the data and tends to feel like a black box. Its value, however, is that in theory it can fit arbitrary nonlinear functions with multiple inputs and outputs, which makes it applicable to regression, clustering, and many other tasks. Among neural networks, the back-propagation (BP) network is the most classic, and this article presents its basic principle and how to use it.
A mathematical model of a single neuron is abstracted from the structure of a biological neuron cell, as shown in Figure 1:
Several external inputs ($x_1 \sim x_n$) are individually weighted and summed, shifted by a bias ($b$), and fed into a nonlinear activation function ($f$), which finally produces the neuron's output ($o$). Its mathematical expression is:
$$o = f\left(\sum_{i=1}^n w_i x_i + b\right) = f(\boldsymbol w^T \boldsymbol x + b)$$

The external inputs entering each neuron are first combined linearly, i.e., a linear function of the inputs is formed before it is passed into the neuron's activation function. If the activation were also linear, the result would be a linear function of a linear function, which is still linear; no matter how many neurons were connected into a network, the whole network would reduce to a single linear function. The activation function must therefore be nonlinear, so that by connecting and nesting weighted layers of neurons the network can approximate arbitrary nonlinear functions and handle complex problems. The activation function is usually chosen from the following three types:
| Name | Characteristics | Expression |
|---|---|---|
| Sigmoid function | positive-valued, differentiable, step-like | $f(x)=\dfrac{1}{1+e^{-x}}$ |
| Hyperbolic tangent | zero-mean, differentiable, step-like | $f(x)=\tanh(x)$ |
| Gaussian function | positive-valued, differentiable, pulse-like | $f(x)=\exp\left(-\dfrac{x^2}{\sigma^2}\right)$ |
As the plots of these activation functions show, a significant difference in response occurs only near 0, which is why a bias ($b$) is added to the activation function's input; this effectively shifts the response function by a suitable amount.
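For concreteness, here is a minimal sketch (not from the original article) of a single neuron $o=f(\boldsymbol w^T\boldsymbol x+b)$ evaluated with each of the three activation functions above; the input, weights, bias, and $\sigma$ are arbitrary example values.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def gaussian(x, sigma=1.0):
    return np.exp(-x**2 / sigma**2)

# arbitrary example input, weights and bias
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.1, 0.4, -0.2])
b = 0.3

z = w @ x + b                  # linear combination plus bias
for f in (sigmoid, tanh, gaussian):
    print(f.__name__, f(z))    # neuron output o = f(z)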
When many neurons are cross-connected in multiple layers, they form a neural network. The outputs of one layer become the inputs of the next; the last layer, which directly produces the network's output, is called the output layer, and the layers before it are called hidden layers. A network with more than one hidden layer is called a deep learning network. This article uses a deep network with two hidden layers (three layers of neurons in total) as the running example; its structure is shown in Figure 2, and the computation and training procedure generalize to networks with more layers.
The network's input is an $n$-dimensional vector $\boldsymbol x = [x_1 \; x_2 \; \cdots \; x_n]^T$. Let the first layer contain $k$ neurons; its weight matrix, bias vector, activation-input vector, and output vector are

$$\boldsymbol w^{(1)} = \begin{bmatrix} w_{11}^{(1)} & w_{21}^{(1)} & \cdots & w_{n1}^{(1)} \\ w_{12}^{(1)} & w_{22}^{(1)} & \cdots & w_{n2}^{(1)} \\ \vdots & \vdots & & \vdots \\ w_{1k}^{(1)} & w_{2k}^{(1)} & \cdots & w_{nk}^{(1)} \end{bmatrix}, \quad
\boldsymbol b^{(1)} = \begin{bmatrix} b_1^{(1)} \\ b_2^{(1)} \\ \vdots \\ b_k^{(1)} \end{bmatrix}, \quad
\boldsymbol z^{(1)} = \begin{bmatrix} z_1^{(1)} \\ z_2^{(1)} \\ \vdots \\ z_k^{(1)} \end{bmatrix}, \quad
\boldsymbol o^{(1)} = \begin{bmatrix} o_1^{(1)} \\ o_2^{(1)} \\ \vdots \\ o_k^{(1)} \end{bmatrix}$$

where $w_{pq}^{(1)}$ is the weight of input $x_p$ into the $q$-th neuron of the first layer. The second layer (with $l$ neurons) and the third, output layer (with $m$ neurons) are defined analogously by $\boldsymbol w^{(2)}, \boldsymbol b^{(2)}, \boldsymbol z^{(2)}, \boldsymbol o^{(2)}$ and $\boldsymbol w^{(3)}, \boldsymbol b^{(3)}, \boldsymbol z^{(3)}, \boldsymbol o^{(3)}$.
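As a quick aside, the parameter shapes defined above can be set up in NumPy roughly as follows; the layer sizes and the standard-normal random initialization are illustrative assumptions (mirroring what the BPANN class later in the article does), and the names used here are not from the article.

import numpy as np

rng = np.random.default_rng(0)
n, k, l, m = 4, 5, 6, 3             # example layer sizes (illustrative values)

# one weight matrix and one bias vector per neuron layer, shapes as defined above
W = [rng.standard_normal((k, n)),   # w(1): k x n
     rng.standard_normal((l, k)),   # w(2): l x k
     rng.standard_normal((m, l))]   # w(3): m x l
B = [rng.standard_normal((k, 1)),   # b(1): k x 1
     rng.standard_normal((l, 1)),   # b(2): l x 1
     rng.standard_normal((m, 1))]   # b(3): m x 1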
Based on the above definitions, the network's computation can be written as:
$$\boldsymbol z^{(1)} = \boldsymbol w^{(1)} \otimes \boldsymbol x + \boldsymbol b^{(1)}, \quad \boldsymbol o^{(1)} = f(\boldsymbol z^{(1)})$$
$$\boldsymbol z^{(2)} = \boldsymbol w^{(2)} \otimes \boldsymbol o^{(1)} + \boldsymbol b^{(2)}, \quad \boldsymbol o^{(2)} = f(\boldsymbol z^{(2)})$$
$$\boldsymbol z^{(3)} = \boldsymbol w^{(3)} \otimes \boldsymbol o^{(2)} + \boldsymbol b^{(3)}, \quad \boldsymbol o^{(3)} = f(\boldsymbol z^{(3)})$$

As for the operator "$\otimes$": when it is defined as the ordinary inner product (matrix-vector multiplication), this is the standard BP network; when it is defined as the Euclidean distance between the right-hand vector and each row of the left-hand matrix, the network becomes a radial basis function (RBF) network, a kind of supervised clustering network. In that case, if the activation function is Gaussian, a neuron is activated more strongly the closer its weight vector is to the input vector.
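A minimal sketch of the forward pass for the BP case, where $\otimes$ is the ordinary matrix product; the sigmoid activation, layer sizes, and random parameters are illustrative assumptions, and the commented-out line indicates how the RBF variant of $\otimes$ could be computed instead.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)
n, k, l, m = 4, 5, 6, 3                      # example layer sizes
W = [rng.standard_normal((k, n)),
     rng.standard_normal((l, k)),
     rng.standard_normal((m, l))]
B = [rng.standard_normal((k, 1)),
     rng.standard_normal((l, 1)),
     rng.standard_normal((m, 1))]

x = rng.standard_normal((n, 1))              # one input vector

o = x
for w, b in zip(W, B):
    z = w @ o + b                            # BP network: inner product
    # RBF variant: z = np.linalg.norm(w - o.T, axis=1, keepdims=True) + b
    o = sigmoid(z)                           # layer output becomes next layer's input
print(o)                                     # network output o(3), an m x 1 vector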
So far, the principle behind a neural network's forward computation is not complicated, and in practice it does not consume much computing power. What is expensive is the training process, whose principle is also more involved: in each round of training the network must quantify how far its output falls short, adjust itself accordingly, and repeat until it produces satisfactory outputs for every input. This section explains how to quantify the gap between the output and the desired result, and how to iteratively improve the weights and biases of each neuron so that the network eventually produces the desired output.
Here the cost function is defined as the mean of the training errors over a set of training samples; it quantifies how poorly the network performs in each round of training. Consider a training set of size $t$ whose input vectors are $\{\boldsymbol x_1, \boldsymbol x_2, \cdots, \boldsymbol x_t\}$, each an $n$-dimensional vector, and whose corresponding output vectors are $\{\boldsymbol y_1, \boldsymbol y_2, \cdots, \boldsymbol y_t\}$, each an $m$-dimensional vector. The cost function is then:
$$J(\boldsymbol w^{(1)}, \boldsymbol w^{(2)}, \boldsymbol w^{(3)}, \boldsymbol b^{(1)}, \boldsymbol b^{(2)}, \boldsymbol b^{(3)}) = \frac{1}{t} \sum_{i=1}^t L_i$$

where $L_i$ is the training error of the $i$-th sample. For convenience in the later derivation, it is defined as half the squared Euclidean distance between the sample's output vector and the network's prediction:

$$L_i(\boldsymbol w^{(1)}, \boldsymbol w^{(2)}, \boldsymbol w^{(3)}, \boldsymbol b^{(1)}, \boldsymbol b^{(2)}, \boldsymbol b^{(3)}) = \frac{1}{2} \| \boldsymbol y_i - \hat{\boldsymbol y}_i \|^2 = \frac{1}{2} \sum_{j=1}^m (y_{ij} - \hat y_{ij})^2$$

The cost function is therefore a multivariate function of all the weights and biases of every layer, and the goal of training is to find the weight matrices and bias vectors that minimize it. This is where gradient descent comes in. The gradient is the vector of the partial derivatives of a multivariate function with respect to each of its variables; each partial derivative reflects how strongly that variable drives the function's growth, so the gradient points in the direction of fastest increase in the multidimensional space, and the opposite direction is the direction of fastest decrease. Moving the variables along that opposite direction is what is called gradient descent. Depending on which subset of the training samples is used to compute the cost-function gradient in each training iteration, there are three kinds of gradient descent (a sketch of the cost computation follows this list):

- batch gradient descent, which uses the entire training set in every iteration;
- stochastic gradient descent, which uses a single randomly chosen sample per iteration;
- mini-batch gradient descent, which uses a small random subset per iteration.
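As a small illustration (not the article's code), the sketch below computes the cost $J$ as the mean of the per-sample errors $L_i$ for a batch of target vectors and predictions; the arrays are arbitrary example data.

import numpy as np

rng = np.random.default_rng(0)
t, m = 8, 3                                   # example: 8 samples, 3-dimensional outputs
Y = rng.random((t, m))                        # target output vectors y_i
Y_hat = rng.random((t, m))                    # network predictions y^_i

L = 0.5 * np.sum((Y - Y_hat) ** 2, axis=1)    # per-sample error L_i
J = L.mean()                                  # cost = mean of the per-sample errors
print(L, J)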
From the above, the key to training a neural network is to first obtain the gradient of its cost function, so that the network parameters can be optimized step by step along the descent direction. Since the derivative of a sum equals the sum of the derivatives,

$$\frac{d}{dx}[f_1(x)+f_2(x)] = \frac{d}{dx}f_1(x) + \frac{d}{dx}f_2(x)$$

and the gradient is simply a vector of partial derivatives, the same property applies to gradients, so

$$\nabla J = \frac{1}{t} \sum_{i=1}^t \nabla L_i$$

For any single training sample, the gradient of its error function is:

$$\nabla L = \left\{ \frac{\partial L}{\partial \boldsymbol w^{(1)}},\ \frac{\partial L}{\partial \boldsymbol w^{(2)}},\ \frac{\partial L}{\partial \boldsymbol w^{(3)}},\ \frac{\partial L}{\partial \boldsymbol b^{(1)}},\ \frac{\partial L}{\partial \boldsymbol b^{(2)}},\ \frac{\partial L}{\partial \boldsymbol b^{(3)}} \right\}$$

Computing these partial derivatives relies on the chain rule for derivatives of composite functions:
$$\frac{df}{dx} = \frac{df}{dg}\frac{dg}{dx}$$

The same rule holds for partial derivatives, so
$$\frac{\partial L}{\partial w_{pq}^{(1)}} = \frac{\partial L}{\partial z_q^{(1)}} \frac{\partial z_q^{(1)}}{\partial w_{pq}^{(1)}}$$

and, by extension,
$$\frac{\partial L}{\partial \boldsymbol w^{(1)}} =
\begin{bmatrix}
\frac{\partial L}{\partial w_{11}^{(1)}} & \frac{\partial L}{\partial w_{21}^{(1)}} & \cdots & \frac{\partial L}{\partial w_{n1}^{(1)}} \\
\frac{\partial L}{\partial w_{12}^{(1)}} & \frac{\partial L}{\partial w_{22}^{(1)}} & \cdots & \frac{\partial L}{\partial w_{n2}^{(1)}} \\
\vdots & \vdots & & \vdots \\
\frac{\partial L}{\partial w_{1k}^{(1)}} & \frac{\partial L}{\partial w_{2k}^{(1)}} & \cdots & \frac{\partial L}{\partial w_{nk}^{(1)}}
\end{bmatrix} =
\begin{bmatrix}
\frac{\partial L}{\partial z_1^{(1)}}\frac{\partial z_1^{(1)}}{\partial w_{11}^{(1)}} & \frac{\partial L}{\partial z_1^{(1)}}\frac{\partial z_1^{(1)}}{\partial w_{21}^{(1)}} & \cdots & \frac{\partial L}{\partial z_1^{(1)}}\frac{\partial z_1^{(1)}}{\partial w_{n1}^{(1)}} \\
\frac{\partial L}{\partial z_2^{(1)}}\frac{\partial z_2^{(1)}}{\partial w_{12}^{(1)}} & \frac{\partial L}{\partial z_2^{(1)}}\frac{\partial z_2^{(1)}}{\partial w_{22}^{(1)}} & \cdots & \frac{\partial L}{\partial z_2^{(1)}}\frac{\partial z_2^{(1)}}{\partial w_{n2}^{(1)}} \\
\vdots & \vdots & & \vdots \\
\frac{\partial L}{\partial z_k^{(1)}}\frac{\partial z_k^{(1)}}{\partial w_{1k}^{(1)}} & \frac{\partial L}{\partial z_k^{(1)}}\frac{\partial z_k^{(1)}}{\partial w_{2k}^{(1)}} & \cdots & \frac{\partial L}{\partial z_k^{(1)}}\frac{\partial z_k^{(1)}}{\partial w_{nk}^{(1)}}
\end{bmatrix}$$

From the forward computation, $z_q^{(1)} = \sum_{p=1}^n w_{pq}^{(1)} x_p + b_q^{(1)}$, so
$$\frac{\partial z_1^{(1)}}{\partial w_{p1}^{(1)}} = \frac{\partial z_2^{(1)}}{\partial w_{p2}^{(1)}} = \cdots = \frac{\partial z_k^{(1)}}{\partial w_{pk}^{(1)}} = x_p$$

and therefore
$$\frac{\partial L}{\partial \boldsymbol w^{(1)}} =
\begin{bmatrix} \frac{\partial L}{\partial z_1^{(1)}} \\ \frac{\partial L}{\partial z_2^{(1)}} \\ \vdots \\ \frac{\partial L}{\partial z_k^{(1)}} \end{bmatrix}
\begin{bmatrix} \frac{\partial z_q^{(1)}}{\partial w_{1q}^{(1)}} & \frac{\partial z_q^{(1)}}{\partial w_{2q}^{(1)}} & \cdots & \frac{\partial z_q^{(1)}}{\partial w_{nq}^{(1)}} \end{bmatrix} =
\begin{bmatrix} \frac{\partial L}{\partial z_1^{(1)}} \\ \frac{\partial L}{\partial z_2^{(1)}} \\ \vdots \\ \frac{\partial L}{\partial z_k^{(1)}} \end{bmatrix}
\begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} = \boldsymbol \delta^{(1)} \boldsymbol x^T$$

where $\boldsymbol \delta^{(1)} = \partial L / \partial \boldsymbol z^{(1)}$ is called the sensitivity of the first layer. In the same way, for the second and third layers,
$$\frac{\partial L}{\partial \boldsymbol w^{(2)}} = \boldsymbol \delta^{(2)} (\boldsymbol o^{(1)})^T$$
$$\frac{\partial L}{\partial \boldsymbol w^{(3)}} = \boldsymbol \delta^{(3)} (\boldsymbol o^{(2)})^T$$

Next consider the partial derivatives with respect to the bias vectors. Taking the first layer as an example, from the forward computation
we see that $b_q^{(1)}$ enters only through $z_q^{(1)}$, so

$$\frac{\partial L}{\partial b_q^{(1)}} = \frac{\partial L}{\partial z_q^{(1)}} \frac{\partial z_q^{(1)}}{\partial b_q^{(1)}} = \frac{\partial L}{\partial z_q^{(1)}}$$

because $\frac{\partial z_q^{(1)}}{\partial b_q^{(1)}} = 1$. By extension,
$$\frac{\partial L}{\partial \boldsymbol b^{(1)}} =
\begin{bmatrix} \frac{\partial L}{\partial b_1^{(1)}} \\ \frac{\partial L}{\partial b_2^{(1)}} \\ \vdots \\ \frac{\partial L}{\partial b_k^{(1)}} \end{bmatrix} =
\begin{bmatrix} \frac{\partial L}{\partial z_1^{(1)}} \\ \frac{\partial L}{\partial z_2^{(1)}} \\ \vdots \\ \frac{\partial L}{\partial z_k^{(1)}} \end{bmatrix} = \boldsymbol \delta^{(1)}$$
$$\frac{\partial L}{\partial \boldsymbol b^{(2)}} = \boldsymbol \delta^{(2)}$$
$$\frac{\partial L}{\partial \boldsymbol b^{(3)}} = \boldsymbol \delta^{(3)}$$

In summary, the gradient of the error function for any single training sample is:
$$\nabla L = \left\{ \boldsymbol \delta^{(1)} \boldsymbol x^T,\ \boldsymbol \delta^{(2)} (\boldsymbol o^{(1)})^T,\ \boldsymbol \delta^{(3)} (\boldsymbol o^{(2)})^T,\ \boldsymbol \delta^{(1)},\ \boldsymbol \delta^{(2)},\ \boldsymbol \delta^{(3)} \right\}$$

What remains is to compute the sensitivities $\boldsymbol \delta^{(i)}$. For a composite function of several variables, the chain rule reads
$$\frac{\partial}{\partial x_1} f[g_1(x_1,x_2), g_2(x_1,x_2)] = \frac{\partial f}{\partial g_1}\frac{\partial g_1}{\partial x_1} + \frac{\partial f}{\partial g_2}\frac{\partial g_2}{\partial x_1}$$

so
$$\frac{\partial L}{\partial z_1^{(1)}} = \frac{\partial L}{\partial o_1^{(1)}} \frac{\partial o_1^{(1)}}{\partial z_1^{(1)}} = \frac{\partial L}{\partial o_1^{(1)}} f'(z_1^{(1)}) = f'(z_1^{(1)}) \left[ \frac{\partial L}{\partial z_1^{(2)}} \frac{\partial z_1^{(2)}}{\partial o_1^{(1)}} + \frac{\partial L}{\partial z_2^{(2)}} \frac{\partial z_2^{(2)}}{\partial o_1^{(1)}} + \cdots + \frac{\partial L}{\partial z_l^{(2)}} \frac{\partial z_l^{(2)}}{\partial o_1^{(1)}} \right]$$

By extension,
$$\boldsymbol \delta^{(1)} = \frac{\partial L}{\partial \boldsymbol z^{(1)}} =
\begin{bmatrix} \frac{\partial L}{\partial z_1^{(1)}} \\ \frac{\partial L}{\partial z_2^{(1)}} \\ \vdots \\ \frac{\partial L}{\partial z_k^{(1)}} \end{bmatrix} =
\begin{bmatrix} f'(z_1^{(1)}) & & & \\ & f'(z_2^{(1)}) & & \\ & & \ddots & \\ & & & f'(z_k^{(1)}) \end{bmatrix}
\begin{bmatrix}
\frac{\partial z_1^{(2)}}{\partial o_1^{(1)}} & \frac{\partial z_2^{(2)}}{\partial o_1^{(1)}} & \cdots & \frac{\partial z_l^{(2)}}{\partial o_1^{(1)}} \\
\frac{\partial z_1^{(2)}}{\partial o_2^{(1)}} & \frac{\partial z_2^{(2)}}{\partial o_2^{(1)}} & \cdots & \frac{\partial z_l^{(2)}}{\partial o_2^{(1)}} \\
\vdots & \vdots & & \vdots \\
\frac{\partial z_1^{(2)}}{\partial o_k^{(1)}} & \frac{\partial z_2^{(2)}}{\partial o_k^{(1)}} & \cdots & \frac{\partial z_l^{(2)}}{\partial o_k^{(1)}}
\end{bmatrix}
\begin{bmatrix} \frac{\partial L}{\partial z_1^{(2)}} \\ \frac{\partial L}{\partial z_2^{(2)}} \\ \vdots \\ \frac{\partial L}{\partial z_l^{(2)}} \end{bmatrix}$$

Since $\partial z_j^{(2)} / \partial o_i^{(1)} = w_{ij}^{(2)}$, the middle matrix is exactly $(\boldsymbol w^{(2)})^T$, so in compact form
$$\boldsymbol \delta^{(1)} = \frac{\partial \boldsymbol o^{(1)}}{\partial \boldsymbol z^{(1)}} \frac{\partial \boldsymbol z^{(2)}}{\partial \boldsymbol o^{(1)}} \frac{\partial L}{\partial \boldsymbol z^{(2)}} = \mathrm{diag}[f'(\boldsymbol z^{(1)})]\, (\boldsymbol w^{(2)})^T \boldsymbol \delta^{(2)}$$

and similarly
$$\boldsymbol \delta^{(2)} = \frac{\partial \boldsymbol o^{(2)}}{\partial \boldsymbol z^{(2)}} \frac{\partial \boldsymbol z^{(3)}}{\partial \boldsymbol o^{(2)}} \frac{\partial L}{\partial \boldsymbol z^{(3)}} = \mathrm{diag}[f'(\boldsymbol z^{(2)})]\, (\boldsymbol w^{(3)})^T \boldsymbol \delta^{(3)}$$

As these formulas show, each layer's sensitivity enters as an input (argument) of the preceding layer's sensitivity: the error propagates backwards from the output layer toward the input layer, which is why this kind of network is called a back-propagation network. Because of this property, no matter how many layers there are, once the sensitivity of the final output layer is known, the sensitivities of all layers can easily be obtained by iteration. In the three-layer BPNN used as the example in this article, the output layer's sensitivity is:
$$\boldsymbol \delta^{(3)} = \frac{\partial L}{\partial \boldsymbol z^{(3)}} =
\begin{bmatrix} \frac{\partial L}{\partial z_1^{(3)}} \\ \frac{\partial L}{\partial z_2^{(3)}} \\ \vdots \\ \frac{\partial L}{\partial z_m^{(3)}} \end{bmatrix}$$

Taking the first component as an example, and noting that $o_1^{(3)} = \hat y_1$,
$$\frac{\partial L}{\partial z_1^{(3)}} = \frac{\partial L}{\partial o_1^{(3)}} \frac{\partial o_1^{(3)}}{\partial z_1^{(3)}} = \frac{\partial L}{\partial \hat y_1} f'(z_1^{(3)}) = (\hat y_1 - y_1) f'(z_1^{(3)})$$

By extension,
$$\boldsymbol \delta^{(3)} = \frac{\partial L}{\partial \boldsymbol z^{(3)}} =
\begin{bmatrix} \frac{\partial L}{\partial z_1^{(3)}} \\ \frac{\partial L}{\partial z_2^{(3)}} \\ \vdots \\ \frac{\partial L}{\partial z_m^{(3)}} \end{bmatrix} =
\begin{bmatrix} f'(z_1^{(3)}) & & & \\ & f'(z_2^{(3)}) & & \\ & & \ddots & \\ & & & f'(z_m^{(3)}) \end{bmatrix}
\begin{bmatrix} \hat y_1 - y_1 \\ \hat y_2 - y_2 \\ \vdots \\ \hat y_m - y_m \end{bmatrix} = \mathrm{diag}[f'(\boldsymbol z^{(3)})]\, (\hat{\boldsymbol y} - \boldsymbol y)$$

Substituting the sensitivities of each layer back into the gradient expression gives
$$\nabla L = \left\{ \frac{\partial L}{\partial \boldsymbol w^{(1)}},\ \frac{\partial L}{\partial \boldsymbol w^{(2)}},\ \frac{\partial L}{\partial \boldsymbol w^{(3)}},\ \frac{\partial L}{\partial \boldsymbol b^{(1)}},\ \frac{\partial L}{\partial \boldsymbol b^{(2)}},\ \frac{\partial L}{\partial \boldsymbol b^{(3)}} \right\} = \left\{ \boldsymbol \delta^{(1)} \boldsymbol x^T,\ \boldsymbol \delta^{(2)} (\boldsymbol o^{(1)})^T,\ \boldsymbol \delta^{(3)} (\boldsymbol o^{(2)})^T,\ \boldsymbol \delta^{(1)},\ \boldsymbol \delta^{(2)},\ \boldsymbol \delta^{(3)} \right\}$$

where, written out in full,

$$\begin{aligned}
\frac{\partial L}{\partial \boldsymbol w^{(1)}} &= \mathrm{diag}[f'(\boldsymbol z^{(1)})](\boldsymbol w^{(2)})^T\,\mathrm{diag}[f'(\boldsymbol z^{(2)})](\boldsymbol w^{(3)})^T\,\mathrm{diag}[f'(\boldsymbol z^{(3)})](\hat{\boldsymbol y}-\boldsymbol y)\,\boldsymbol x^T \\
\frac{\partial L}{\partial \boldsymbol w^{(2)}} &= \mathrm{diag}[f'(\boldsymbol z^{(2)})](\boldsymbol w^{(3)})^T\,\mathrm{diag}[f'(\boldsymbol z^{(3)})](\hat{\boldsymbol y}-\boldsymbol y)\,(\boldsymbol o^{(1)})^T \\
\frac{\partial L}{\partial \boldsymbol w^{(3)}} &= \mathrm{diag}[f'(\boldsymbol z^{(3)})](\hat{\boldsymbol y}-\boldsymbol y)\,(\boldsymbol o^{(2)})^T \\
\frac{\partial L}{\partial \boldsymbol b^{(1)}} &= \mathrm{diag}[f'(\boldsymbol z^{(1)})](\boldsymbol w^{(2)})^T\,\mathrm{diag}[f'(\boldsymbol z^{(2)})](\boldsymbol w^{(3)})^T\,\mathrm{diag}[f'(\boldsymbol z^{(3)})](\hat{\boldsymbol y}-\boldsymbol y) \\
\frac{\partial L}{\partial \boldsymbol b^{(2)}} &= \mathrm{diag}[f'(\boldsymbol z^{(2)})](\boldsymbol w^{(3)})^T\,\mathrm{diag}[f'(\boldsymbol z^{(3)})](\hat{\boldsymbol y}-\boldsymbol y) \\
\frac{\partial L}{\partial \boldsymbol b^{(3)}} &= \mathrm{diag}[f'(\boldsymbol z^{(3)})](\hat{\boldsymbol y}-\boldsymbol y)
\end{aligned}$$

When the activation function is the Sigmoid function, its derivative is
$$f'(x) = (1+e^{-x})^{-2} e^{-x} = f(x)[1-f(x)]$$

so that
$$f'(\boldsymbol z^{(i)}) = \mathrm{diag}[f(\boldsymbol z^{(i)})]\,[1 - f(\boldsymbol z^{(i)})] = \mathrm{diag}(\boldsymbol o^{(i)})\,(1 - \boldsymbol o^{(i)})$$

At this point the error-function gradient $\nabla L$ of every sample in the training set can be computed; substituting them into the cost-function gradient formula above (i.e., taking their arithmetic mean) yields the gradient of the cost function.
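To make the procedure concrete, the following sketch (not the article's implementation) applies these formulas once for a single training sample on a small sigmoid network; all layer sizes and data are arbitrary, and the element-wise products play the role of the diag[f'(z)] factors.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(1)
n, k, l, m = 3, 4, 4, 2                      # example layer sizes
W = [rng.standard_normal((k, n)),
     rng.standard_normal((l, k)),
     rng.standard_normal((m, l))]
B = [rng.standard_normal((s, 1)) for s in (k, l, m)]
x = rng.standard_normal((n, 1))              # one training input
y = rng.random((m, 1))                       # its target output

# forward pass, keeping every layer's output
O = []
o = x
for w, b in zip(W, B):
    o = sigmoid(w @ o + b)
    O.append(o)

# backward pass: sensitivities delta(3), delta(2), delta(1)
delta = [None, None, None]
delta[2] = O[2] * (1 - O[2]) * (O[2] - y)            # diag[f'(z(3))] (y^ - y)
delta[1] = O[1] * (1 - O[1]) * (W[2].T @ delta[2])   # diag[f'(z(2))] (w(3))^T delta(3)
delta[0] = O[0] * (1 - O[0]) * (W[1].T @ delta[1])   # diag[f'(z(1))] (w(2))^T delta(2)

# gradient of L for this single sample
dW = [delta[0] @ x.T, delta[1] @ O[0].T, delta[2] @ O[1].T]
dB = [delta[0], delta[1], delta[2]]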
Since the gradient points in the direction in which the function grows fastest, finding the minimizer of the cost function requires moving in the opposite direction: all weights and biases are adjusted by a small step along this so-called gradient-descent direction, and the size of the adjustment is called the iteration step size. The gradient is then recomputed at the new point, another adjustment is made against it, and the process repeats until the gradient is small enough.
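As a toy illustration of the update rule and stopping criterion only (not the article's code), the loop below minimizes a one-dimensional cost $J(w)=(w-3)^2$ whose gradient is known in closed form; the function name, step size, and tolerance are arbitrary choices.

def grad_J(w):
    # gradient of the toy cost J(w) = (w - 3)**2
    return 2 * (w - 3)

w = 0.0          # arbitrary starting point
step = 0.1       # iteration step size (learning rate)
while abs(grad_J(w)) > 1e-6:   # stop once the gradient is small enough
    w = w - step * grad_J(w)   # move against the gradient
print(w)         # converges to the minimizer w = 3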
Normalization is a linear transformation: every component of a vector is divided by some common linear norm of the data (typically the vector's length, its largest component, or the range between its largest and smallest components), possibly followed by a shift to center it. Through this linear transformation every quantity is expressed in the same unit (or as a dimensionless value such as a proportion), so the values become directly comparable, much as comparing a student's two exam results should be done by rank rather than by raw score.
As the plots of the activation functions show, the network's outputs can only lie in $[0,1]$ or $[-1,1]$, so when the network is applied to real problems the output parameters must also be normalized. The outputs of the training samples can be normalized as follows:
$$\boldsymbol y^{\circ} = \frac{\boldsymbol y - \min(\boldsymbol y)}{\max(\boldsymbol y) - \min(\boldsymbol y)}$$

where $\boldsymbol y$ is a training sample's output vector, the minimum and maximum are taken per output component over the whole training set (as in the implementation below), and $\boldsymbol y^{\circ}$ is the normalized output vector. Once the range of the quantity to be estimated is known, say with upper- and lower-bound vectors $\boldsymbol m$ and $\boldsymbol n$ respectively, the network's estimate must be denormalized:
$$\hat{\boldsymbol y} = \hat{\boldsymbol y}^{\circ} (\boldsymbol m - \boldsymbol n) + \boldsymbol n$$

where $\hat{\boldsymbol y}^{\circ}$ is the estimate vector produced by the network, which is itself already normalized, and the product is taken element-wise.
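A brief sketch of this min-max normalization and the matching denormalization, with illustrative data:

import numpy as np

Y = np.array([[0.5], [2.0], [-1.0], [3.5]])   # example training outputs, one column per output component
y_min, y_max = Y.min(axis=0), Y.max(axis=0)

Y_norm = (Y - y_min) / (y_max - y_min)        # normalize into [0, 1] for training

y_hat_norm = np.array([[0.25]])               # a normalized network estimate (example value)
m, n = y_max, y_min                           # known upper and lower bounds of the target
y_hat = y_hat_norm * (m - n) + n              # denormalize back to the original scale
print(Y_norm, y_hat)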
In this example a BPNN is trained on a given set of points in the Cartesian plane in order to fit a curve through them. The implementation is in Python.
import numpy as np
import matplotlib.pyplot as plt

# Define a target function
def f(x):
    return np.sin(x) + 0.5*x

# Create a series of discrete points from the function above and plot them
x = np.linspace(-2*np.pi, 2*np.pi, 50)
y = f(x)
plt.figure(2)
plt.plot(x, y, 'b.', label='points')
plt.legend(loc=2)
The figure above shows the discrete points to be fitted in this example; they are also the training data.
Based on the neural-network principles above, a BPNN class can be defined as follows:
import numpy as np
import numpy.matlib as matlib
import matplotlib.pyplot as plt
import numpy.random as random


class BPANN:
    '''
    1. Defines a neural network whose layers have neures=[k1, k2, ...] neurons,
       where k1 is the number of input neurons.
    2. The optional parameter W is a list of the connection-weight matrices between
       neuron layers, and B is a list of the bias vectors of each neuron layer;
       both default to random values.
    3. The activation function of every neuron defaults to the Sigmoid function.
    '''
    def __init__(self, neures, W=None, B=None):
        self.neures = neures
        self.W = []  # the weight matrices of each neuron layer
        self.B = []  # the bias vectors of each neuron layer
        if W is not None and B is not None:
            self.W = W
            self.B = B
        else:
            # Construct the weight and bias matrices of each neuron layer
            for i in range(1, len(neures)):
                # Matrices of random floats following the standard normal distribution
                self.W.append(matlib.randn(neures[i], neures[i-1]))
                self.B.append(matlib.randn(neures[i], 1))

    # Activation function (Sigmoid function) of every neuron
    def actFun(self, x):
        # The input x is treated as a vector
        return 1/(1+np.exp(-x))

    # Use the ANN to compute an output
    def work(self, X, lowerLimit=0, upperLimit=1):
        # Check the type of the input parameters
        if not all(isinstance(x, float) or isinstance(x, int) for x in X):
            raise Exception("the type of parameters must be float or int list")
        else:
            self.Z = []  # the input vectors of each neuron layer
            self.O = []  # the output vectors of each neuron layer
            # The input vector of the first neuron layer
            self.Z.append(self.W[0]*np.matrix(X).T + self.B[0])
            # The output vector of the first neuron layer
            self.O.append(self.actFun(self.Z[-1]))
            for (w, b) in zip(self.W[1:], self.B[1:]):
                self.Z.append(w*self.O[-1] + b)
                self.O.append(self.actFun(self.Z[-1]))
            # Denormalize and convert the output to a list
            return list((self.O[-1]*(upperLimit-lowerLimit)+lowerLimit).A1)

    # Train the ANN
    def train(self, sampleX, sampleY, step, iterations=100, size=0):
        '''
        sampleX: the inputs of all distinct training samples, a one-dimensional list.
        sampleY: the outputs of all distinct training samples, a one-dimensional list
                 of the same length as sampleX.
        step: the step size of each training iteration, a number in (0, 10).
        iterations: the number of passes over the sample subsets; multiplied by the
                    number of subsets it gives the total number of iterations. A
                    positive integer, 100 by default.
        size: the size of the sample subset drawn from the full sample set in each
              iteration; an integer between 1 and the size of the full set (if it is
              larger, or 0, the full set is used).
        '''
        # The list of sample indices of the universal set
        self.uniSetIndices = list(range(0, len(sampleX)))
        random.shuffle(self.uniSetIndices)  # shuffle the samples
        # The list of index lists of the samples in each subset
        self.subsetsIndices = []
        if size > 0 and size <= len(sampleX):
            i = 0
            while i < len(sampleX):
                self.subsetsIndices.append(self.uniSetIndices[i: i+size])
                i = i + size
        else:
            self.subsetsIndices = [self.uniSetIndices]
        # Normalize sampleY
        SYM = np.matrix(sampleY)
        # Use each column's max to construct a matrix of the same shape
        maxData = matlib.ones((SYM.shape[0], 1)) * np.amax(SYM, axis=0)
        # Use each column's min to construct a matrix of the same shape
        minData = matlib.ones((SYM.shape[0], 1)) * np.amin(SYM, axis=0)
        # Normalized = (original - min)/(max - min)
        sampleY = np.nan_to_num(np.divide((SYM - minData), (maxData - minData)), nan=1).tolist()
        # The list of the loss expectation of each subset in each iteration
        J = []
        # The number of neuron layers, including the input layer
        layerNum = len(self.neures)
        while iterations > 0:
            # Process each subset
            for sI in self.subsetsIndices:
                # The partial-derivative matrices of each neuron layer's weights
                self.dW = []
                # The partial-derivative vectors of each neuron layer's biases
                self.dB = []
                # Construct all weight and bias gradient matrices filled with zeros
                for i in range(1, layerNum):
                    self.dW.append(matlib.zeros((self.neures[i], self.neures[i-1])))
                    self.dB.append(matlib.zeros((self.neures[i], 1)))
                # Process each sample in the subset
                L = 0  # the total loss of all samples in the subset
                subSize = len(sI)  # the size of the subset
                for i in sI:
                    # The sensitivity vectors of each neuron layer
                    Delta = []
                    # Get the current output of the ANN
                    Y = np.matrix(self.work(sampleX[i])).T
                    # Accumulate the loss, defined as half the sum of squared errors
                    L = L + 0.5*np.sum(np.square(Y-np.matrix(sampleY[i]).T))
                    # Get the sensitivity vector of the last neuron layer
                    Delta.append(np.diag(np.multiply(self.O[-1], (1-self.O[-1])).A1)
                                 * (Y-np.matrix(sampleY[i]).T))
                    for j in range(layerNum-2, 0, -1):
                        # delta(j) = diag[f'(z(j))] * (w(j+1)).T * delta(j+1), in the article's notation
                        Delta.append(np.diag(np.multiply(self.O[j-1], (1-self.O[j-1])).A1)
                                     * self.W[j].T*Delta[-1])
                    Delta.reverse()
                    # Accumulate the gradients over the samples of the subset
                    self.dW[0] = self.dW[0] + Delta[0]*np.matrix(sampleX[i])
                    self.dB[0] = self.dB[0] + Delta[0]
                    for j in range(1, layerNum-1):
                        self.dW[j] = self.dW[j] + Delta[j]*self.O[j-1].T
                        self.dB[j] = self.dB[j] + Delta[j]
                # Append the loss expectation of the subset
                J.append(L/subSize)
                # Adjust the weights and biases of each neuron layer
                for i in range(0, layerNum-1):
                    self.W[i] = self.W[i] - step*self.dW[i]/subSize
                    self.B[i] = self.B[i] - step*self.dB[i]/subSize
            # Complete one iteration
            iterations = iterations - 1
        # Plot the loss-decline curve
        plt.figure('loss curve')
        plt.plot(range(0, len(J)), J)
Now train this BPNN with the discrete data generated above. Here a network with a single input, a single output, and six layers of neurons is constructed.
import numpy as np
import matplotlib.pyplot as plt
import DataMining.BPANN as ANN

# Transform data type for ANN training
x0 = []
y0 = []
for (i, j) in zip(x, y):
    x0.append([i])
    y0.append([j])

# Construct and train an ANN
neures = [1, 60, 160, 250, 160, 60, 1]
ann = ANN.BPANN(neures)
ann.train(x0, y0, 0.8, 150, 20)

y1 = []
for i in x0:
    y1.append(ann.work(i, -np.pi, np.pi)[0])
plt.figure(2)
plt.plot(x, y1, 'r', label='regression ANN')
plt.legend(loc=2)
The loss function evolved over the training iterations as follows:
The final fitting result is: