DL & CNN: Step-by-step optimization of a convolutional neural network on the MNIST handwritten-digit dataset (six steps: sigmoid → 1 conv layer → 2 conv layers → ReLU → data augmentation → dropout), raising accuracy from 97% to 99.6%
| Model | Accuracy | Notes |
| --- | --- | --- |
| NN + sigmoid | 97% | Plain fully connected network with sigmoid activations. |
| CNN, 1 conv layer + sigmoid | 98.78% | One convolutional-pooling layer in front of the fully connected layer. |
| CNN, 2 conv layers + sigmoid | 99.06% | Stacking more layers does not automatically raise accuracy much and can overfit; the right depth has to be found experimentally, not "the deeper the better". |
| CNN, 2 conv layers + ReLU | 99.23% | Replace sigmoid with rectified linear units, f(z) = max(0, z). |
| + expanded training set | 99.37% | Expand the training set to 250,000 images (the original 50,000 × 5) simply by shifting each image one pixel up, down, left, and right. |
| + dropout | 99.60% | Apply dropout (randomly dropping half the neurons) to the final fully connected layers. |

Two remarks on the final model:

(1) Dropout is applied only to the final fully connected layers: the convolutional layers themselves already resist overfitting, because the shared weights force each convolution filter to learn from the entire image.

(2) Why does this combination overcome several classic deep-learning difficulties? The CNN greatly reduces the number of parameters; dropout reduces overfitting; replacing sigmoid with rectified linear units mitigates overfitting and the problem of very different learning speeds across layers; and GPU computation makes each update fast, so the network can be trained for many more epochs.
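The `n_in` values in the listings below follow from the layer geometry. As a quick sanity check (plain arithmetic, independent of the network3 code), each 5×5 "valid" convolution shrinks a 28-pixel side to 24, and 2×2 max-pooling then halves it:

```python
def conv_pool_out(side, filter_size=5, pool=2):
    """Output side length after a 'valid' convolution followed by pooling."""
    return (side - filter_size + 1) // pool

s1 = conv_pool_out(28)   # first ConvPoolLayer: 28 -> 24 -> 12
s2 = conv_pool_out(s1)   # second ConvPoolLayer: 12 -> 8 -> 4
print(s1, s2)            # 12 4
print(20 * s1 * s1)      # 2880 = 20*12*12, n_in after one conv layer
print(40 * s2 * s2)      # 640 = 40*4*4, n_in after two conv layers
```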
```python
import network3
from network3 import Network
from network3 import ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer, ReLU

# network3's layers expect Theano shared variables, so the data is loaded
# with network3.load_data_shared (mnist_loader.load_data_wrapper returns a
# format meant for the earlier network.py/network2.py chapters)
training_data, validation_data, test_data = network3.load_data_shared()
mini_batch_size = 10

# Plain NN with sigmoid activations; accuracy ~97%
net = Network([
    FullyConnectedLayer(n_in=784, n_out=100),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.1, validation_data, test_data)
```
```python
# CNN with 1 convolutional-pooling layer + sigmoid; accuracy 98.78%
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5),
                  poolsize=(2, 2)),
    FullyConnectedLayer(n_in=20*12*12, n_out=100),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.1, validation_data, test_data)
```
```python
# CNN with 2 convolutional-pooling layers + sigmoid; accuracy 99.06%.
# Stacking more layers does not automatically raise accuracy much and can
# overfit; the right depth has to be found experimentally.
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5),
                  poolsize=(2, 2)),
    ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                  filter_shape=(40, 20, 5, 5),
                  poolsize=(2, 2)),
    FullyConnectedLayer(n_in=40*4*4, n_out=100),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.1, validation_data, test_data)
```
```python
# CNN with rectified linear units, f(z) = max(0, z), replacing sigmoid;
# accuracy 99.23%
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5),
                  poolsize=(2, 2),
                  activation_fn=ReLU),  # ReLU instead of the default sigmoid
    ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                  filter_shape=(40, 20, 5, 5),
                  poolsize=(2, 2),
                  activation_fn=ReLU),
    FullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.03, validation_data, test_data,
        lmbda=0.1)
```
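ReLU here is nothing more than clipping negative inputs to zero; its gradient is 1 for any positive input, so it does not saturate the way sigmoid does for large |z|. A one-function sketch:

```python
import numpy as np

def relu(z):
    """Rectified linear unit: f(z) = max(0, z), applied element-wise."""
    return np.maximum(0.0, z)

print(relu(np.array([-2.0, 0.0, 3.0])))  # negatives clipped to 0
```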
```python
# CNN with ReLU + expanded training set of 250,000 images (the original
# 50,000 x 5, obtained by shifting each image one pixel up, down, left,
# and right); accuracy 99.37%.
# First expand the data on disk:
#   $ python expand_mnist.py
expanded_training_data, _, _ = network3.load_data_shared(
    "../data/mnist_expanded.pkl.gz")
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5),
                  poolsize=(2, 2),
                  activation_fn=ReLU),
    ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                  filter_shape=(40, 20, 5, 5),
                  poolsize=(2, 2),
                  activation_fn=ReLU),
    FullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(expanded_training_data, 60, mini_batch_size, 0.03, validation_data,
        test_data, lmbda=0.1)
```
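`expand_mnist.py` ships with the book's repository; a minimal NumPy sketch of the same one-pixel-shift idea (illustrative, not the script itself) looks like this:

```python
import numpy as np

def expand_image(img):
    """Return the original 28x28 image plus four copies shifted one pixel
    up, down, left, and right (vacated pixels filled with zeros)."""
    shifted = []
    for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        rolled = np.roll(img, shift, axis=axis)
        # zero out the wrapped-around row/column so pixels do not wrap
        if axis == 0:
            rolled[0 if shift == 1 else -1, :] = 0
        else:
            rolled[:, 0 if shift == 1 else -1] = 0
        shifted.append(rolled)
    return [img] + shifted

images = expand_image(np.arange(784, dtype=float).reshape(28, 28))
print(len(images))  # 5: every training image becomes five
```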
```python
# CNN with ReLU + expanded training set + dropout (randomly dropping half
# the neurons) in the final fully connected layers; accuracy 99.60%
expanded_training_data, _, _ = network3.load_data_shared(
    "../data/mnist_expanded.pkl.gz")
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5),
                  poolsize=(2, 2),
                  activation_fn=ReLU),
    ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                  filter_shape=(40, 20, 5, 5),
                  poolsize=(2, 2),
                  activation_fn=ReLU),
    FullyConnectedLayer(
        n_in=40*4*4, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
    FullyConnectedLayer(
        n_in=1000, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
    SoftmaxLayer(n_in=1000, n_out=10, p_dropout=0.5)],
    mini_batch_size)
net.SGD(expanded_training_data, 40, mini_batch_size, 0.03, validation_data,
        test_data)
```
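network3 implements dropout internally via `p_dropout`. The core idea, sketched here as "inverted" dropout in NumPy (a common variant that rescales during training, not the network3 code itself): during training, zero each unit with probability p and scale the survivors so the expected activation is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_dropout=0.5):
    """Inverted dropout: zero each unit with probability p_dropout and
    rescale survivors by 1/(1-p_dropout) to keep the expectation fixed."""
    mask = rng.random(activations.shape) >= p_dropout
    return activations * mask / (1.0 - p_dropout)

a = np.ones(1000)
print(dropout(a).mean())  # close to 1.0: roughly half zeroed, rest doubled
```

At test time no units are dropped and no rescaling is needed, since the expectation was already preserved during training.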