
Step 44 of Deep Learning Image Recognition: ResNet50 Modeling (TensorFlow)


Demonstrated on 64-bit Windows 10.

1. Foreword

(1) ResNet50

ResNet50 is a deep learning model proposed in 2015 by researchers at Microsoft Research. "ResNet" is short for "Residual Network", and "50" indicates that the network contains 50 layers.

The defining feature of ResNet50 is the residual block. In a traditional neural network, each layer simply applies a new transformation to the previous layer's output. In ResNet, each layer applies a new transformation but also adds the layer's original input back to the result; this preserved input is the "residual" connection. The network therefore learns the difference between input and output rather than the output directly, which makes very deep networks easier to train and improves performance.
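The residual idea can be sketched without any deep learning framework. Here `transform` is a made-up stand-in for the block's learned layers (in real ResNet it would be a stack of convolutions and batch normalizations); the point is only that the block returns F(x) + x instead of F(x):

```python
import numpy as np

def transform(x):
    # Stand-in for the block's learned transformation F(x);
    # in a real ResNet this is a stack of conv + batch-norm layers.
    return 0.5 * x

def residual_block(x):
    # A residual block outputs F(x) + x, keeping the original input.
    return transform(x) + x

x = np.array([1.0, 2.0, 3.0])
print(residual_block(x))  # F(x) + x = 1.5 * x elementwise
```

If `transform` learns to output zeros, the block reduces to the identity, which is exactly why stacking many such blocks does not degrade the signal.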

 

(2) A pretrained ResNet50

Keras ships a pretrained ResNet50 model, which saves a lot of work:

 

2. ResNet50 Transfer Learning in Code

We continue with the chest X-ray dataset: distinguishing tuberculosis patients from healthy people. There are 700 X-rays of tuberculosis patients and 900 of healthy people, stored in separate folders.

(a) Import packages

from tensorflow import keras
import tensorflow as tf
from tensorflow.python.keras.layers import Dense, Flatten, Conv2D, MaxPool2D, Dropout, Activation, Reshape, Softmax, GlobalAveragePooling2D
from tensorflow.python.keras.layers.convolutional import Convolution2D, MaxPooling2D
from tensorflow.python.keras import Sequential
from tensorflow.python.keras import Model
from tensorflow.python.keras.optimizers import adam_v2
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator, image_dataset_from_directory
from tensorflow.python.keras.layers.preprocessing.image_preprocessing import RandomFlip, RandomRotation, RandomContrast, RandomZoom, RandomTranslation
import os, PIL, pathlib
import warnings

# GPU setup
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    gpu0 = gpus[0]  # if there are several GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

warnings.filterwarnings("ignore")  # suppress warnings
plt.rcParams['font.sans-serif'] = ['SimHei']  # display CJK labels correctly
plt.rcParams['axes.unicode_minus'] = False    # display the minus sign correctly

(b) Load the dataset

# 1. Load the data
data_dir = "./cat_dog"  # point this at the folder containing the two class subfolders
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

batch_size = 32
img_height = 100
img_width = 100

train_ds = image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

class_names = train_ds.class_names
print(class_names)
print(train_ds)

# 2. Inspect the data
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

# 3. Configure the pipelines
AUTOTUNE = tf.data.AUTOTUNE

def train_preprocessing(image, label):
    return (image / 255.0, label)

train_ds = (
    train_ds.cache()
    .shuffle(800)
    .map(train_preprocessing)
    .prefetch(buffer_size=AUTOTUNE)
)

val_ds = (
    val_ds.cache()
    .map(train_preprocessing)
    .prefetch(buffer_size=AUTOTUNE)
)

# 4. Visualize a few samples
# class_names was already derived from the subfolder names above, so we reuse it here
plt.figure(figsize=(10, 8))
plt.suptitle("Sample images")
for images, labels in train_ds.take(1):
    for i in range(15):
        plt.subplot(4, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        plt.imshow(images[i])
        plt.xlabel(class_names[labels[i]])  # labels are 0-based indices into class_names
plt.show()

(c) Data augmentation

data_augmentation = Sequential([
    RandomFlip("horizontal_and_vertical"),
    RandomRotation(0.2),
    RandomContrast(1.0),
    RandomZoom(0.5, 0.2),
    RandomTranslation(0.3, 0.5),
])

def prepare(ds):
    ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y), num_parallel_calls=AUTOTUNE)
    return ds

train_ds = prepare(train_ds)

(d) Load ResNet50

# Load the pretrained model (plus the layer/regularizer imports this block needs)
from tensorflow.python.keras.applications import resnet
from tensorflow.python.keras import Input, regularizers
from tensorflow.python.keras.layers import BatchNormalization

IMG_SIZE = (img_height, img_width, 3)
base_model = resnet.ResNet50(include_top=False,  # drop the top fully connected classifier
                             weights='imagenet')
base_model.trainable = False  # freeze the pretrained weights
inputs = Input(shape=IMG_SIZE)

# Model
x = base_model(inputs, training=False)  # run the frozen base in inference mode
# Global pooling
x = GlobalAveragePooling2D()(x)
# BatchNormalization
x = BatchNormalization()(x)
# Dropout
x = Dropout(0.8)(x)
# Dense
x = Dense(128, kernel_regularizer=regularizers.l2(0.3))(x)  # reduce to 128 units, with L2 regularization
# BatchNormalization
x = BatchNormalization()(x)
# Activation
x = Activation('relu')(x)
# Output layer
outputs = Dense(2, kernel_regularizer=regularizers.l2(0.3))(x)  # with L2 regularization
# BatchNormalization
outputs = BatchNormalization()(outputs)
# Activation: softmax, so the two outputs form a distribution matching sparse_categorical_crossentropy
outputs = Activation('softmax')(outputs)
# Assemble the model
model = Model(inputs, outputs)
# Print the model structure
print(model.summary())

The model structure is then printed:

(e) Compile the model

# Define the optimizer
from tensorflow.python.keras.optimizers import adam_v2, rmsprop_v2
# from tensorflow.python.keras.optimizer_v2.gradient_descent import SGD
optimizer = adam_v2.Adam()
# optimizer = SGD(learning_rate=0.001)
# optimizer = rmsprop_v2.RMSprop()

# Compile the model
model.compile(optimizer=optimizer,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
from tensorflow.python.keras.callbacks import ModelCheckpoint, Callback, EarlyStopping, ReduceLROnPlateau, LearningRateScheduler

NO_EPOCHS = 100
PATIENCE = 10
VERBOSE = 1

# Dynamic learning rate schedule
annealer = LearningRateScheduler(lambda x: 1e-5 * 0.99 ** (x + NO_EPOCHS))
# Early stopping
earlystopper = EarlyStopping(monitor='loss', patience=PATIENCE, verbose=VERBOSE)
# Checkpoint the best weights by validation accuracy
checkpointer = ModelCheckpoint('mtb_jet_best_model_ResNet50.h5',
                               monitor='val_accuracy',
                               verbose=VERBOSE,
                               save_best_only=True,
                               save_weights_only=True)

train_model = model.fit(train_ds,
                        epochs=NO_EPOCHS,
                        verbose=1,
                        validation_data=val_ds,
                        callbacks=[earlystopper, checkpointer, annealer])

# Save the final model under a different filename, so the best-weights checkpoint above is not overwritten
model.save('mtb_jet_final_model_ResNet50.h5')
print("The trained model has been saved.")

Training is reasonably fast. However, the accuracy fluctuates quite a bit:

(f) Visualizing Accuracy and Loss

import matplotlib.pyplot as plt

loss = train_model.history['loss']
acc = train_model.history['accuracy']
val_loss = train_model.history['val_loss']
val_acc = train_model.history['val_accuracy']
epoch = range(1, len(loss) + 1)

fig, ax = plt.subplots(1, 2, figsize=(10, 4))
ax[0].plot(epoch, loss, label='Train loss')
ax[0].plot(epoch, val_loss, label='Validation loss')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('Loss')
ax[0].legend()
ax[1].plot(epoch, acc, label='Train acc')
ax[1].plot(epoch, val_acc, label='Validation acc')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('Accuracy')
ax[1].legend()
plt.show()

These plots show how training went:

Blue is the training set and orange the validation set. The loss still trends downward overall, and although the validation loss fluctuates, it is acceptable. On the accuracy curve, however, the validation set swings alarmingly: a real case of fire and ice.

(g) Confusion matrix visualization and model metrics

Nothing special here; it all works much like the earlier ML models:

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.python.keras.models import load_model
from matplotlib.pyplot import imshow
from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns
import pandas as pd
import math

# Helper that plots a confusion matrix
def plot_cm(labels, predictions):
    # Build the confusion matrix
    conf_numpy = confusion_matrix(labels, predictions)
    # Convert it to a DataFrame
    conf_df = pd.DataFrame(conf_numpy, index=class_names, columns=class_names)
    plt.figure(figsize=(8, 7))
    sns.heatmap(conf_df, annot=True, fmt="d", cmap="BuPu")
    plt.title('Confusion matrix', fontsize=15)
    plt.ylabel('True label', fontsize=14)
    plt.xlabel('Predicted label', fontsize=14)

val_pre = []
val_label = []
for images, labels in val_ds:  # use .take(1) here to build the matrix from part of the validation data
    for image, label in zip(images, labels):
        # Add a batch dimension to the image
        img_array = tf.expand_dims(image, 0)
        # Predict the class of the image
        prediction = model.predict(img_array)
        val_pre.append(np.argmax(prediction))
        val_label.append(label)
plot_cm(val_label, val_pre)

cm_val = confusion_matrix(val_label, val_pre)
a_val = cm_val[0, 0]
b_val = cm_val[0, 1]
c_val = cm_val[1, 0]
d_val = cm_val[1, 1]
acc_val = (a_val + d_val) / (a_val + b_val + c_val + d_val)  # accuracy: correctly classified samples over all samples
error_rate_val = 1 - acc_val  # error rate: the complement of accuracy, the fraction misclassified
sen_val = d_val / (d_val + c_val)  # sensitivity: fraction of positives classified correctly
sep_val = a_val / (a_val + b_val)  # specificity: fraction of negatives classified correctly
precision_val = d_val / (b_val + d_val)  # precision: fraction of predicted positives that are truly positive
F1_val = (2 * precision_val * sen_val) / (precision_val + sen_val)  # F1: harmonic mean of precision and recall, balancing the two
MCC_val = (d_val * a_val - b_val * c_val) / (math.sqrt((d_val + b_val) * (d_val + c_val) * (a_val + b_val) * (a_val + c_val)))  # Matthews correlation coefficient: robust when the two classes differ greatly in size
print("Validation sensitivity:", sen_val,
      "Validation specificity:", sep_val,
      "Validation accuracy:", acc_val,
      "Validation error rate:", error_rate_val,
      "Validation precision:", precision_val,
      "Validation F1:", F1_val,
      "Validation MCC:", MCC_val)

train_pre = []
train_label = []
for images, labels in train_ds:  # use .take(1) here to build the matrix from part of the training data
    for image, label in zip(images, labels):
        # Add a batch dimension to the image
        img_array = tf.expand_dims(image, 0)
        # Predict the class of the image
        prediction = model.predict(img_array)
        train_pre.append(np.argmax(prediction))
        train_label.append(label)
plot_cm(train_label, train_pre)

cm_train = confusion_matrix(train_label, train_pre)
a_train = cm_train[0, 0]
b_train = cm_train[0, 1]
c_train = cm_train[1, 0]
d_train = cm_train[1, 1]
acc_train = (a_train + d_train) / (a_train + b_train + c_train + d_train)
error_rate_train = 1 - acc_train
sen_train = d_train / (d_train + c_train)
sep_train = a_train / (a_train + b_train)
precision_train = d_train / (b_train + d_train)
F1_train = (2 * precision_train * sen_train) / (precision_train + sen_train)
MCC_train = (d_train * a_train - b_train * c_train) / (math.sqrt((d_train + b_train) * (d_train + c_train) * (a_train + b_train) * (a_train + c_train)))
print("Training sensitivity:", sen_train,
      "Training specificity:", sep_train,
      "Training accuracy:", acc_train,
      "Training error rate:", error_rate_train,
      "Training precision:", precision_train,
      "Training F1:", F1_train,
      "Training MCC:", MCC_train)

The results are only so-so:
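As a sanity check on the metric formulas above, they can be evaluated on a toy 2x2 confusion matrix (the cell counts here are made up for illustration, not results from this model):

```python
import math

# Toy 2x2 confusion matrix: rows = true class, cols = predicted class,
# laid out [[a, b], [c, d]] with class 1 as the positive class, as in the code above
a, b, c, d = 80, 10, 5, 55

acc = (a + d) / (a + b + c + d)  # accuracy
sen = d / (d + c)                # sensitivity (recall of positives)
spe = a / (a + b)                # specificity
pre = d / (b + d)                # precision
f1 = 2 * pre * sen / (pre + sen)
mcc = (d * a - b * c) / math.sqrt((d + b) * (d + c) * (a + b) * (a + c))

print(round(acc, 3), round(sen, 3), round(spe, 3), round(f1, 3))  # 0.9 0.917 0.889 0.88
```

Note how accuracy alone hides the asymmetry: sensitivity and specificity differ, which is exactly why the code above reports all of them.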

(h) Plotting the ROC curve and AUC

from sklearn import metrics
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.python.keras.models import load_model
from matplotlib.pyplot import imshow
from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns
import pandas as pd
import math

def plot_roc(name, labels, predictions, **kwargs):
    fp, tp, _ = metrics.roc_curve(labels, predictions)
    plt.plot(fp, tp, label=name, linewidth=2, **kwargs)
    plt.plot([0, 1], [0, 1], color='orange', linestyle='--')
    plt.xlabel('False positives rate')
    plt.ylabel('True positives rate')
    ax = plt.gca()
    ax.set_aspect('equal')

val_pre_auc = []
val_label_auc = []
for images, labels in val_ds:
    for image, label in zip(images, labels):
        img_array = tf.expand_dims(image, 0)
        prediction_auc = model.predict(img_array)
        val_pre_auc.append((prediction_auc)[:, 1])
        val_label_auc.append(label)
auc_score_val = metrics.roc_auc_score(val_label_auc, val_pre_auc)

train_pre_auc = []
train_label_auc = []
for images, labels in train_ds:
    for image, label in zip(images, labels):
        img_array_train = tf.expand_dims(image, 0)
        prediction_auc = model.predict(img_array_train)
        train_pre_auc.append((prediction_auc)[:, 1])  # store the class-1 probability, not the label!
        train_label_auc.append(label)
auc_score_train = metrics.roc_auc_score(train_label_auc, train_pre_auc)

plot_roc('validation AUC: {0:.4f}'.format(auc_score_val), val_label_auc, val_pre_auc, color="red", linestyle='--')
plot_roc('training AUC: {0:.4f}'.format(auc_score_train), train_label_auc, train_pre_auc, color="blue", linestyle='--')
plt.legend(loc='lower right')
# plt.savefig("roc.pdf", dpi=300, format="pdf")
print("Training AUC:", auc_score_train, "Validation AUC:", auc_score_val)

The ROC curves are shown below:

3. Tuning Process

Honestly, this model took a long time to tune. The adjustments were:

(1) The initial value of the dynamic learning rate: set to 1e-5, with the following code:

annealer = LearningRateScheduler(lambda x: 1e-5 * 0.99 ** (x+NO_EPOCHS))

(2) The number of epochs was increased to 100: NO_EPOCHS = 100

The idea is to give the model enough time to learn slowly and reach good performance.
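For reference, the schedule above can be evaluated directly to see just how small the learning rate is over a 100-epoch run (a standalone sketch; no training required):

```python
NO_EPOCHS = 100

def schedule(epoch):
    # Same formula as the LearningRateScheduler above:
    # 1e-5 scaled by 0.99^(epoch + NO_EPOCHS)
    return 1e-5 * 0.99 ** (epoch + NO_EPOCHS)

for e in (0, 50, 99):
    print(e, f"{schedule(e):.2e}")
# The rate starts well below 1e-5 (since the exponent already includes
# NO_EPOCHS) and keeps decaying gently across the run.
```

This explains the slow, steady learning the tuning aimed for: even at epoch 0 the effective rate is only a few millionths.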

4. Comparison of ResNet50, InceptionResnetV2, MobileNet, EfficientNet, DenseNet201, Inception V3, and VGG19

 
