Modify the code below for the CIFAR-10 dataset: change the network structure to LeNet, adjust the optimization algorithm and its learning rate, the batch size batch_size, and the number of training epochs, and analyze the corresponding results.
Code to be modified:
# In[1]: Load the data
from keras.datasets import mnist
from keras import utils
(x_train, y_train), (x_test, y_test) = mnist.load_data()
y_train = utils.to_categorical(y_train, num_classes=10)
y_test = utils.to_categorical(y_test, num_classes=10)
x_train, x_test = x_train/255.0, x_test/255.0

# In[2]: Build the network
from keras import Sequential, layers, optimizers
model = Sequential([
    layers.Reshape((28,28,1), input_shape=(28,28)),            # 2D convolution expects input shaped [samples, height, width, channels]
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),  # 3x3 kernels, 32 output channels
    layers.MaxPooling2D(pool_size=(2, 2)),                     # downsample by taking the max over each 2x2 grid
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.Flatten(),                                          # flatten the previous output into a 1-D vector (3*3*64 = 576)
    layers.Dropout(0.5),                                       # during training, randomly zero 50% of the units in each batch
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.summary()
#optimizer = optimizers.SGD(learning_rate=0.5)    # for this example, SGD gives a slightly lower recognition rate
optimizer = optimizers.Adam(learning_rate=0.001)
#optimizer = optimizers.RMSprop(learning_rate=0.001)
model.compile(optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

# In[3]: Train and test
model.fit(x_train, y_train, batch_size=64, epochs=1)
loss, accuracy = model.evaluate(x_test, y_test)
After adjusting the network to a LeNet-style CNN for CIFAR-10, changing the learning rate to 0.005, the batch size to 32, and the number of epochs to 15, the following result is obtained:
the final recognition accuracy reaches 0.7328.
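Before looking at the final script, it is worth noting how the optimizer and its learning rate can be compared, since the assignment asks for this as well. One way is to rebuild and briefly retrain the same network for each candidate setting, as in the hedged sketch below; the candidate values and the build_model() helper are illustrative assumptions, not part of the original solution, and the sketch assumes the CIFAR-10 data from the loading cell is already in memory.

from tensorflow.keras import Sequential, layers, optimizers

def build_model():
    # Same small CNN as in the modified script below, rebuilt fresh for each run
    return Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

# Assumes x_train, y_train, x_test, y_test from the CIFAR-10 loading cell
candidates = [
    ("SGD",     lambda lr: optimizers.SGD(learning_rate=lr)),
    ("Adam",    lambda lr: optimizers.Adam(learning_rate=lr)),
    ("RMSprop", lambda lr: optimizers.RMSprop(learning_rate=lr)),
]
for name, make_opt in candidates:
    for lr in (0.001, 0.005):                     # assumed candidate learning rates
        model = build_model()
        model.compile(make_opt(lr), loss="categorical_crossentropy",
                      metrics=["accuracy"])
        # One epoch per setting is enough to compare trends; use more for real runs
        model.fit(x_train, y_train, batch_size=32, epochs=1, verbose=0)
        loss, acc = model.evaluate(x_test, y_test, verbose=0)
        print(f"{name:8s} learning_rate={lr}: test accuracy {acc:.4f}")

The complete modified script that produced the 0.7328 result is given below.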
# -*- coding: utf-8 -*-
"""
Spyder Editor

This is a temporary script file.
"""
# In[1]: Load the data
from tensorflow.keras.datasets import cifar10
from tensorflow.keras import utils
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
y_train = utils.to_categorical(y_train, num_classes=10)
y_test = utils.to_categorical(y_test, num_classes=10)
x_train, x_test = x_train/255.0, x_test/255.0

# In[2]: Build the network
from tensorflow.keras import Sequential, layers, optimizers
model = Sequential([
    layers.Reshape((32,32,3), input_shape=(32,32,3)),          # CIFAR-10 images are already [height, width, channels] = 32x32x3, so this Reshape is a no-op kept for symmetry with the MNIST version
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),  # 3x3 kernels, 32 output channels
    layers.MaxPooling2D(pool_size=(2, 2)),                     # downsample by taking the max over each 2x2 grid
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.Flatten(),                                          # flatten the previous output into a 1-D vector (4*4*64 = 1024 for 32x32 input)
    layers.Dropout(0.5),                                       # during training, randomly zero 50% of the units in each batch
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.summary()
#optimizer = optimizers.SGD(learning_rate=0.5)    # for this example, SGD gives a slightly lower recognition rate
optimizer = optimizers.Adam(learning_rate=0.005)
#optimizer = optimizers.RMSprop(learning_rate=0.001)
model.compile(optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
#model.compile(optimizer, loss='mean_squared_logarithmic_error', metrics=['accuracy'])
#model.compile(optimizer, loss='mean_squared_error', metrics=['accuracy'])

# In[3]: Train and test
# batch_size can be set to a power of 2
model.fit(x_train, y_train, batch_size=32, epochs=15)
loss, accuracy = model.evaluate(x_test, y_test)
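Note that this script keeps the original three-convolution stack and only changes the input to 32x32x3, so it is LeNet-like rather than the classic LeNet-5. If a strict LeNet-5 layout is wanted, a minimal sketch for CIFAR-10 could look like the following; the layer sizes (6/16 filters, 120/84 dense units) follow the commonly cited LeNet-5 description, and ReLU activations are a common substitution, so treat it as an illustration rather than the reference solution.

# Hedged sketch: classic LeNet-5 style network adapted to 32x32x3 CIFAR-10 input
from tensorflow.keras import Sequential, layers

lenet = Sequential([
    layers.Conv2D(6, kernel_size=(5, 5), activation="relu",
                  input_shape=(32, 32, 3)),                    # C1: 6 feature maps, 28x28
    layers.AveragePooling2D(pool_size=(2, 2)),                 # S2: subsample to 14x14
    layers.Conv2D(16, kernel_size=(5, 5), activation="relu"),  # C3: 16 feature maps, 10x10
    layers.AveragePooling2D(pool_size=(2, 2)),                 # S4: subsample to 5x5
    layers.Flatten(),                                          # 5*5*16 = 400 features
    layers.Dense(120, activation="relu"),                      # C5/F5
    layers.Dense(84, activation="relu"),                       # F6
    layers.Dense(10, activation="softmax"),                    # output layer
])
lenet.summary()

This block can replace the Sequential([...]) definition in the # In[2] cell; the compile, fit, and evaluate code stays unchanged.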
Simply by adjusting batch_size and the number of epochs, a fairly good result can be trained.
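To actually analyze how batch_size and the number of epochs affect training, one option is to keep the History object returned by model.fit and plot the per-epoch curves. A minimal sketch, assuming the model and data from the script above are already defined and that matplotlib is available (an extra assumption not in the original script):

# Hedged sketch: record and plot per-epoch curves to analyze batch_size / epochs
import matplotlib.pyplot as plt

history = model.fit(x_train, y_train, batch_size=32, epochs=15,
                    validation_data=(x_test, y_test))

# Key names follow tf.keras when metrics=['accuracy'] is used
plt.plot(history.history["accuracy"], label="train accuracy")
plt.plot(history.history["val_accuracy"], label="test accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()

If the training and test curves diverge early, adding more epochs mainly increases overfitting; if both curves are still rising at epoch 15, more epochs may still help.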