
Crop Pest and Disease Image Recognition Based on Convolutional Neural Networks (OpenCV, PyTorch, TensorFlow, MobileNetV3)

Pest and Disease Image Recognition

Preface:

I recently built a crop pest image recognition program and am sharing it here. The deep learning framework used in this article is TensorFlow 2, together with OpenCV and a few other libraries. The dataset contains 61 categories, each representing a different pest class. The network model used is MobileNetV3.
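
For reference, a dataset organized one folder per class can be loaded directly with tf.keras' directory utilities. The short sketch below is not part of the original project: the paths are hypothetical placeholders, and it only assumes the 61 classes are stored as sub-folders of a training and a validation directory.

import tensorflow as tf

# Minimal loading sketch (hypothetical paths, one sub-folder per class)
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "dataset/train",          # hypothetical path to the training images
    image_size=(224, 224),    # matches the MobileNetV3 input size used later
    batch_size=32,
    label_mode="int",         # integer labels, for use with a sparse categorical loss
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "dataset/val",            # hypothetical path to the validation images
    image_size=(224, 224),
    batch_size=32,
    label_mode="int",
)
num_classes = len(train_ds.class_names)  # should be 61 for this dataset

On recent TensorFlow versions the same helper is also available as tf.keras.utils.image_dataset_from_directory.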


A video of the results is shown below:

Crop pest image recognition (demo video)

Building the MobileNetV3 model

The code is shown below:

# MobileNetV3 network model, adapted from the official tf.keras code
import tensorflow as tf
from tensorflow.keras import layers, models

"""
    Reference:
    - [Searching for MobileNetV3](https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)
    The following table describes the performance of MobileNets v3:
    ------------------------------------------------------------------------
    MACs stands for Multiply Adds
    |Classification Checkpoint|MACs(M)|Parameters(M)|Top1 Accuracy|Pixel1 CPU(ms)|
    |---|---|---|---|---|
    | mobilenet_v3_large_1.0_224              | 217 | 5.4 |   75.6   |   51.2  |
    | mobilenet_v3_large_0.75_224             | 155 | 4.0 |   73.3   |   39.8  |
    | mobilenet_v3_large_minimalistic_1.0_224 | 209 | 3.9 |   72.3   |   44.1  |
    | mobilenet_v3_small_1.0_224              | 66  | 2.9 |   68.1   |   15.8  |
    | mobilenet_v3_small_0.75_224             | 44  | 2.4 |   65.4   |   12.8  |
    | mobilenet_v3_small_minimalistic_1.0_224 | 65  | 2.0 |   61.9   |   12.2  |
    For image classification use cases, see
    [this page for detailed examples](https://keras.io/api/applications/#usage-examples-for-image-classification-models).
    For transfer learning use cases, make sure to read the
    [guide to transfer learning & fine-tuning](https://keras.io/guides/transfer_learning/).
"""


##################################################################################################################################
# Build the complete MobileNetV3 model ###########################################################################################
##################################################################################################################################
def MobileNetV3(input_shape=[224, 224, 3], classes=1000, dropout_rate=0.2, alpha=1.0, weights=None,
                model_type='large', minimalistic=False, classifier_activation='softmax', include_preprocessing=False):
    # If a weights file is given, we are doing transfer learning, which means the BN layers must stay in inference mode.
    # Otherwise, after unfreezing the whole network, accuracy drops and loss rises: before unfreezing, the BN layers used the
    # means/variances of the previous dataset, and although new moving statistics are maintained after unfreezing, each training
    # step normalizes with the current batch's statistics. The mismatch is large enough to collapse the features.
    if weights:
        bn_training = False
    else:
        bn_training = None
    bn_decay = 0.99  # Momentum of the BN moving averages; this value needs to match the number of steps and the batch size, otherwise odd behavior can appear
    # Determine the axis that holds the channels
    channel_axis = -1
    # Adjust some configuration parameters depending on whether the minimalistic variant is selected
    if minimalistic:
        kernel = 3
        activation = relu
        se_ratio = None
        name = "mini"
    else:
        kernel = 5
        activation = hard_swish
        se_ratio = 0.25
        name = "norm"
    # Define the model's input tensor
    img_input = layers.Input(shape=input_shape)
    # Whether to include a preprocessing (rescaling) layer
    if include_preprocessing:
        x = layers.Rescaling(scale=1. / 127.5, offset=-1.)(img_input)
    else:
        x = img_input
    # The first feature-extraction layer (stem) of the model
    x = layers.Conv2D(16, kernel_size=3, strides=(2, 2), padding='same', use_bias=False, name='Conv')(x)
    x = layers.BatchNormalization(axis=channel_axis, epsilon=1e-3, momentum=bn_decay, name='Conv/BatchNorm')(x,
                                                                                                             training=bn_training)
    x = activation(x)
    # The backbone feature extraction of the model
    if model_type == 'large':
        x = MobileNetV3Large(x, kernel, activation, se_ratio, alpha, bn_training, bn_decay)
        last_point_ch = 1280
    else:
        x = MobileNetV3Small(x, kernel, activation, se_ratio, alpha, bn_training, bn_decay)
        last_point_ch = 1024
    # The final feature-extraction stage (head) of the model
    last_conv_ch = _depth(x.shape[channel_axis] * 6)
    # if the width multiplier is greater than 1 we increase the number of output channels
    if alpha > 1.0:
        last_point_ch = _depth(last_point_ch * alpha)
    x = layers.Conv2D(last_conv_ch, kernel_size=1, padding='same', use_bias=False, name='Conv_1')(x)
    x = layers.BatchNormalization(axis=channel_axis, epsilon=1e-3, momentum=bn_decay, name='Conv_1/BatchNorm')(x,
                                                                                                               training=bn_training)
    x = activation(x)
    # For TF >= 2.6, the first line below (with keepdims=True) is enough on its own; otherwise use the second and third lines
    # x = layers.GlobalAveragePooling2D(data_format='channels_last', keepdims=True)(x)
    x = layers.GlobalAveragePooling2D(data_format='channels_last')(x)
    x = tf.expand_dims(tf.expand_dims(x, 1), 1)
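
The excerpt above references several helpers whose definitions are not shown here: relu, hard_swish, _depth, and the backbone builders MobileNetV3Large / MobileNetV3Small. For reference only, the small helpers look roughly like this in the official tf.keras implementation (a sketch, not necessarily the exact code used in this project):

def relu(x):
    return layers.ReLU()(x)


def hard_sigmoid(x):
    return layers.ReLU(6.0)(x + 3.0) * (1.0 / 6.0)


def hard_swish(x):
    return layers.Multiply()([x, hard_sigmoid(x)])


def _depth(v, divisor=8, min_value=None):
    # Round a channel count to a multiple of `divisor`, never rounding down by more than 10%
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v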
    
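Assuming the complete MobileNetV3 function from the full source (including the dropout/classification head and its final return of a keras Model), a minimal usage sketch for this 61-class task could look as follows; the optimizer and hyperparameters are illustrative, not necessarily those used in the project:

# Usage sketch for the 61-class pest/disease dataset (illustrative hyperparameters)
model = MobileNetV3(input_shape=[224, 224, 3],
                    classes=61,            # 61 pest/disease categories
                    model_type='large',
                    weights=None)          # pass a weights file when fine-tuning a pretrained model
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

When a weights file is passed, bn_training is forced to False in the function above, so the BatchNormalization layers keep using their stored moving statistics during fine-tuning, which avoids the accuracy collapse described in the comments.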