
[Neural Networks] (5) Convolutional Neural Network (ResNet-50). Case study: 10-class classification of artworks, with dataset

Hello everyone! Today I'd like to share how to build the ResNet-50 convolutional neural network in TensorFlow 2.0. The case study: we have collected paintings by 10 great artists, and we use a convolutional neural network to determine which master painted a given work.

Dataset: Baidu Netdisk

Extraction code: 2h5x


1. Data Loading

After splitting the images into training, validation, and test folders, read them with tf.keras.preprocessing.image_dataset_from_directory(). The label_mode parameter controls the label format: 'int' means the target y is an integer class index (0, 1, 2, 3, ...); 'categorical' means one-hot encoding, where the element at the class index is 1 (for example, in a five-class problem an image of the second class is encoded as [0,1,0,0,0]); 'binary' is for binary classification.
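As a quick illustration (my addition, not part of the loader), the one-hot encoding produced by label_mode='categorical' is what tf.one_hot computes:

import tensorflow as tf
# class index 1 in a five-class problem -> [0,1,0,0,0]
print(tf.one_hot(1, depth=5).numpy())  # [0. 1. 0. 0. 0.]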

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model, optimizers, layers

#(1) Load the datasets
def get_data(height, width, batchsz):
    # training set
    filepath1 = 'C:/Users/admin/.spyder-py3/test/数据集/艺术作品/new_data/train'
    train_ds = tf.keras.preprocessing.image_dataset_from_directory(
        filepath1,
        label_mode='categorical',  # one-hot labels; 'int' for integer labels, 'categorical' for multi-class, 'binary' for binary, or None
        seed=123,
        image_size=(height, width),  # resize the images
        batch_size=batchsz)
    # validation set
    filepath2 = 'C:/Users/admin/.spyder-py3/test/数据集/艺术作品/new_data/val'
    val_ds = tf.keras.preprocessing.image_dataset_from_directory(
        filepath2,
        label_mode='categorical',
        seed=123,
        image_size=(height, width),
        batch_size=batchsz)
    # test set
    filepath3 = 'C:/Users/admin/.spyder-py3/test/数据集/艺术作品/new_data/test'
    test_ds = tf.keras.preprocessing.image_dataset_from_directory(
        filepath3,
        label_mode='categorical',
        seed=123,
        image_size=(height, width),
        batch_size=batchsz)
    return train_ds, val_ds, test_ds

# read the images from the folders; height/width must match the network input layer, plus the batch size
train_ds, val_ds, test_ds = get_data(224, 224, 32)

# class names
class_names = train_ds.class_names
print('Classes:', class_names)
# Classes: ['Alfred_Sisley', 'Edgar_Degas', 'Francisco_Goya', 'Marc_Chagall', 'Pablo_Picasso', 'Paul_Gauguin', 'Peter_Paul_Rubens', 'Rembrandt', 'Titian', 'Vincent_van_Gogh']

2. Data Preprocessing

Define a preprocessing function that maps each pixel value of x from [0,255] to [-1,1] (mapping to [0,1] would work just as well). Apply it to every element of the datasets with .map(), and shuffle the training data with .shuffle(), which randomizes the order without breaking the correspondence between x and y.

#(2) Preprocessing
def processing(x, y):
    x = 2 * tf.cast(x, tf.float32) / 255.0 - 1  # map each pixel from [0,255] to [-1,1]
    y = tf.cast(y, tf.int32)
    return x, y

# build the datasets
train_ds = train_ds.map(processing).shuffle(10000)  # training data
val_ds = val_ds.map(processing)    # validation data
test_ds = test_ds.map(processing)  # test data

# check that the data is processed correctly
sample = next(iter(train_ds))  # build an iterator; each call returns one batch
print('x_batch.shape:', sample[0].shape, 'y_batch.shape', sample[1].shape)
# x_batch.shape: (32, 224, 224, 3) y_batch.shape (32, 10)

# plot a few samples
import matplotlib.pyplot as plt
for i in range(15):
    plt.subplot(3, 5, i+1)
    plt.imshow((sample[0][i] + 1) / 2)  # sample holds one batch; map pixels back to [0,1] so imshow displays them correctly
    plt.xticks([])  # hide axis ticks
    plt.yticks([])
plt.show()

The processed images are shown below:
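One subtlety worth noting (my addition): image_dataset_from_directory returns data that is already batched, so the .shuffle(10000) above shuffles whole batches rather than individual images. If you want sample-level shuffling, a minimal sketch, assuming the same batch size of 32:

# shuffle individual samples: unbatch first, then shuffle, then re-batch
train_ds = train_ds.unbatch().shuffle(10000).batch(32)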


3. Network Construction

Now for the most important step: building the ResNet-50 network. The network structure follows the linked diagram (resnet50结构图); you can code it up piece by piece from that diagram. Here I build ResNet-50 with the Keras functional API. The principles behind ResNet are explained in detail in the linked article (六、ResNet网络详细解析(超详细哦)).
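As background (my summary of the standard ResNet design, not taken from the linked article): each bottleneck block adds a residual mapping to a shortcut path. Writing the stacked 1×1, 3×3, 1×1 convolutions as F, the identity block computes y = F(x) + x, which requires F(x) and x to have the same shape; the conv block instead uses a projection shortcut, y = F(x) + W_s·x, where W_s is a strided 1×1 convolution that matches the shortcut's spatial size and channel count to F(x). This is exactly the difference between conv_block and iden_block below.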

#(3) Build ResNet-50

# conv_block: residual block with a projection shortcut
def conv_block(input_tensor, filters, stride):
    # number of kernels (feature maps) for each of the three conv layers
    filter1, filter2, filter3 = filters
    # ==1== main path
    x = layers.Conv2D(filter1, kernel_size=(1,1), strides=stride)(input_tensor)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    x = layers.Conv2D(filter2, kernel_size=(3,3), strides=(1,1), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    x = layers.Conv2D(filter3, kernel_size=(1,1), strides=(1,1))(x)
    x = layers.BatchNormalization()(x)
    # ==2== shortcut path: a strided 1x1 conv projects the input to the same shape as x
    shortcut = layers.Conv2D(filter3, kernel_size=(1,1), strides=stride)(input_tensor)
    shortcut = layers.BatchNormalization()(shortcut)
    # ==3== merge the two paths
    x = layers.add([x, shortcut])
    x = layers.Activation('relu')(x)
    return x

# identity_block: residual block with an identity shortcut
def iden_block(input_tensor, filters):
    # number of kernels for each of the three conv layers
    filter1, filter2, filter3 = filters
    # ==1== main path
    x = layers.Conv2D(filter1, kernel_size=(1,1), strides=(1,1))(input_tensor)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    x = layers.Conv2D(filter2, kernel_size=(3,3), strides=(1,1), padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    x = layers.Conv2D(filter3, kernel_size=(1,1), strides=(1,1))(x)
    x = layers.BatchNormalization()(x)
    # ==2== merge with the identity shortcut
    x = layers.add([x, input_tensor])
    x = layers.Activation('relu')(x)
    return x

# the full network
def resnet50(input_shape=[224,224,3], output_shape=10):
    # input layer
    inputs = keras.Input(shape=input_shape)  # [224,224,3]
    # zero-padding: 3 rows/columns on each side
    x = layers.ZeroPadding2D((3,3))(inputs)
    # stem convolution
    x = layers.Conv2D(64, kernel_size=(7,7), strides=(2,2))(x)  # [112,112,64]
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    x = layers.MaxPool2D(pool_size=(3,3), strides=(2,2))(x)  # [55,55,64]
    # block1
    x = conv_block(x, [64, 64, 256], stride=(1,1))  # [55,55,256]
    x = iden_block(x, [64, 64, 256])                # [55,55,256]
    x = iden_block(x, [64, 64, 256])                # [55,55,256]
    # block2 (the third width is 512 in standard ResNet-50, matching the [28,28,512] output)
    x = conv_block(x, [128, 128, 512], stride=(2,2))  # [28,28,512]
    x = iden_block(x, [128, 128, 512])                # [28,28,512]
    x = iden_block(x, [128, 128, 512])                # [28,28,512]
    x = iden_block(x, [128, 128, 512])                # [28,28,512]
    # block3
    x = conv_block(x, [256, 256, 1024], stride=(2,2))  # [14,14,1024]
    x = iden_block(x, [256, 256, 1024])                # [14,14,1024]
    x = iden_block(x, [256, 256, 1024])                # [14,14,1024]
    x = iden_block(x, [256, 256, 1024])                # [14,14,1024]
    x = iden_block(x, [256, 256, 1024])                # [14,14,1024]
    x = iden_block(x, [256, 256, 1024])                # [14,14,1024]
    # block4
    x = conv_block(x, [512, 512, 2048], stride=(2,2))  # [7,7,2048]
    x = iden_block(x, [512, 512, 2048])                # [7,7,2048]
    x = iden_block(x, [512, 512, 2048])                # [7,7,2048]
    # average pooling
    x = layers.AveragePooling2D(pool_size=(7,7))(x)  # [1,1,2048]
    # flatten
    x = layers.Flatten()(x)  # [None,2048]
    # output layer; no softmax here (handled by from_logits=True at compile time)
    outputs = layers.Dense(output_shape)(x)
    # build the model
    model = Model(inputs=inputs, outputs=outputs)
    return model

# create ResNet-50
model = resnet50()
# inspect the network structure
model.summary()
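Before training, a quick sanity check (my addition) confirms that the model maps a batch of 224×224 RGB images to 10 logits:

# pass a dummy batch through the network to verify the output shape
dummy = tf.zeros((1, 224, 224, 3))
print(model(dummy).shape)  # (1, 10)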

The printed network structure is as follows:

__________________________________________________________________________________________________
 Layer (type)                          Output Shape           Param #    Connected to
==================================================================================================
 input_1 (InputLayer)                  [(None, 224, 224, 3)]  0          []
 zero_padding2d (ZeroPadding2D)        (None, 230, 230, 3)    0          ['input_1[0][0]']
 conv2d (Conv2D)                       (None, 112, 112, 64)   9472       ['zero_padding2d[0][0]']
 batch_normalization (BatchNormalization) (None, 112, 112, 64) 256       ['conv2d[0][0]']
 activation (Activation)               (None, 112, 112, 64)   0          ['batch_normalization[0][0]']
 ...
 (many layers omitted)
 ...
 activation_48 (Activation)            (None, 7, 7, 2048)     0          ['add_15[0][0]']
 average_pooling2d (AveragePooling2D)  (None, 1, 1, 2048)     0          ['activation_48[0][0]']
 flatten (Flatten)                     (None, 2048)           0          ['average_pooling2d[0][0]']
 dense (Dense)                         (None, 10)             20490      ['flatten[0][0]']
==================================================================================================
Total params: 23,608,202
Trainable params: 23,555,082
Non-trainable params: 53,120
__________________________________________________________________________________________________
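As a cross-check (my addition; it assumes tf.keras.applications is available), the reference ResNet50 with a 10-way classifier head should report the same parameter count:

from tensorflow.keras.applications import ResNet50
# reference implementation, randomly initialized, 10 output classes
ref = ResNet50(weights=None, input_shape=(224, 224, 3), classes=10)
print(ref.count_params())  # 23,608,202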


4. Network Configuration

We use a dynamic learning rate that decays exponentially, so the network approaches the optimum quickly at the start of training and then converges toward it more slowly. Because the output layer does not apply softmax to turn its raw scores into probabilities, the crossentropy loss must be compiled with from_logits=True: the loss then applies softmax to the logits internally before comparing them with the ground truth, which also improves numerical stability.
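For reference (standard TensorFlow behavior, not spelled out in the post): ExponentialDecay computes learning_rate = initial_learning_rate * decay_rate ** (step / decay_steps), where step is the optimizer's update counter. A quick sketch of the schedule used below; note that decay_steps=2 shrinks the rate by 5% every two optimizer steps, which is quite aggressive:

# inspect the learning-rate schedule at a few optimizer steps
lr_schedule = optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.0001, decay_steps=2, decay_rate=0.95)
for step in [0, 2, 10, 100]:
    print(step, float(lr_schedule(step)))  # lr = 1e-4 * 0.95 ** (step / 2)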

#(4) Network configuration
# dynamic learning rate: exponential decay
exponential_decay = optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.0001,  # initial learning rate
    decay_steps=2,                 # decay step interval
    decay_rate=0.95)               # decay rate

# compile
model.compile(optimizer=optimizers.Adam(learning_rate=exponential_decay),  # pass the schedule as the learning rate
              # the labels are already one-hot; sparse_categorical_crossentropy would instead take integer class indices
              loss=tf.losses.CategoricalCrossentropy(from_logits=True),  # from_logits applies softmax to the raw outputs before computing the crossentropy
              metrics=['accuracy'])  # model evaluation metric

# train on the training set, validate on the validation set, 10 epochs, reshuffle before each epoch;
# keep the returned History object for plotting in the next step
history = model.fit(train_ds, validation_data=val_ds, epochs=10, shuffle=True)

5. Model Evaluation

Plot the accuracy and loss curves of the training and validation sets against each other to check whether the model is overfitting.

#(5) Evaluation
# ==1== accuracy
train_acc = history.history['accuracy']     # training-set accuracy
val_acc = history.history['val_accuracy']   # validation-set accuracy
# ==2== loss
train_loss = history.history['loss']        # training-set loss
val_loss = history.history['val_loss']      # validation-set loss
# ==3== plots
epochs_range = range(len(train_acc))
plt.figure(figsize=(10,5))
# accuracy curves
plt.subplot(1,2,1)
plt.plot(epochs_range, train_acc, label='train_acc')
plt.plot(epochs_range, val_acc, label='val_acc')
plt.legend()
# loss curves
plt.subplot(1,2,2)
plt.plot(epochs_range, train_loss, label='train_loss')
plt.plot(epochs_range, val_loss, label='val_loss')
plt.legend()
plt.show()
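
Finally, the test set loaded in step 1 goes unused above; a minimal sketch to close the loop (evaluate() returns the loss followed by the metrics declared at compile time):

# evaluate the trained model on the held-out test set
test_loss, test_acc = model.evaluate(test_ds)
print('test accuracy:', test_acc)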
