If the activation function is the sigmoid, its derivative σ'(z) = σ(z)(1 − σ(z)) reaches its maximum value of 1/4 at z = 0, so a product of many such factors naturally tends toward zero (and making w large does not help: the corresponding z becomes large, and the sigmoid's derivative gets even closer to zero).
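A quick numerical check of this claim (plain NumPy, not code from the original post):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Derivative of the sigmoid: sigma'(z) = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid_prime(0.0))        # 0.25, the maximum possible value
print(sigmoid_prime(0.0) ** 10)  # ~9.5e-7: ten such factors multiplied together vanish quickly
print(sigmoid_prime(5.0))        # ~0.0066: a large z pushes the derivative toward zero
```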
1. Why not use a fully connected network for image classification?
The spatial structure of the image needs to be taken into account.
This is what gives rise to convolutional neural networks.
2. Three key ideas:
1)local receptive fields
2)shared weights
3)pooling
Take the 28×28 MNIST images as an example and treat the 784 input pixels as a rectangular grid. As shown in the figure, let the local receptive field be 5×5; each placement of this 5×5 window corresponds to one neuron in the first hidden layer. With a stride length of 1, the first hidden layer is a 24×24 grid (28 − 5 + 1 = 24).
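A quick way to sanity-check that output size (a standalone helper, not from the original post):

```python
def conv_output_size(input_size, kernel_size, stride=1, padding=0):
    # Number of valid placements of the kernel along one dimension
    return (input_size + 2 * padding - kernel_size) // stride + 1

print(conv_output_size(28, 5, stride=1))  # 24, so the first hidden layer is 24x24
```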
feature map
shared weights
shared bias
kernel/filter (determined by the shared weights and the shared bias)
Three feature maps correspond to three different kinds of extracted features.
Each kernel is defined by 25 (5×5) shared weights and 1 shared bias, and produces one feature map.
(Compared with a fully connected layer, this greatly reduces the number of parameters.)
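A rough comparison of parameter counts for this example (illustrative arithmetic only; the 30-neuron fully connected layer is an assumed size, not from the post):

```python
# Convolutional first layer: 3 feature maps, each with 5x5 shared weights plus one shared bias
conv_params = 3 * (5 * 5 + 1)

# Hypothetical fully connected first layer: 784 inputs to 30 hidden neurons, plus 30 biases
fc_params = 784 * 30 + 30

print(conv_params)  # 78
print(fc_params)    # 23550
```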
A pooling layer usually follows a convolutional layer and condenses the information in the convolutional layer's output.
Example: max-pooling takes the maximum value of each 2×2 block of the input.
This discards absolute position information while keeping relative position information, and reduces the number of parameters.
An alternative is L2 pooling: take the square root of the sum of the squares of the activations in each 2×2 block.
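A minimal sketch of 2×2 max-pooling on a small array (plain NumPy, not the post's actual code):

```python
import numpy as np

def max_pool_2x2(x):
    # x: 2D array with even height and width; take the maximum over each 2x2 block
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.arange(16).reshape(4, 4)
print(max_pool_2x2(feature_map))
# [[ 5  7]
#  [13 15]]
```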
The last layer is a fully connected layer, connected to the 10 output neurons (in this example).
Results of a run with 2,000 training iterations on the MNIST dataset:
step 0, training accuracy 0.16000
step 100, training accuracy 0.82000
step 200, training accuracy 0.94000
step 300, training accuracy 0.90000
step 400, training accuracy 0.96000
step 500, training accuracy 0.92000
step 600, training accuracy 0.94000
step 700, training accuracy 0.94000
step 800, training accuracy 0.96000
step 900, training accuracy 1.00000
step 1000, training accuracy 0.96000
step 1100, training accuracy 0.98000
step 1200, training accuracy 0.98000
step 1300, training accuracy 0.98000
step 1400, training accuracy 0.96000
step 1500, training accuracy 1.00000
step 1600, training accuracy 1.00000
step 1700, training accuracy 0.96000
step 1800, training accuracy 0.94000
step 1900, training accuracy 1.00000
test accuracy 0.97400
Elapsed time: 237.717117 s
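For reference, a minimal tf.keras sketch of a comparable network (one 5×5 convolution with 3 feature maps, 2×2 max-pooling, and a fully connected layer with 10 outputs). The layer sizes follow the example in the text; the ReLU activation, Adam optimizer, batch size, and epoch count are assumptions and need not match the code that produced the log above:

```python
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    # 3 feature maps, 5x5 local receptive fields, stride 1: 28x28 -> 24x24
    tf.keras.layers.Conv2D(3, kernel_size=5, strides=1, activation="relu"),
    # 2x2 max-pooling: 24x24 -> 12x12
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    # Fully connected output layer, one neuron per digit class
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, batch_size=50, epochs=2,
          validation_data=(x_test, y_test))
```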