
[mmDetection Framework Walkthrough] Getting Started, Part 2: Training on Your Own Dataset


1. Prepare your XML-format dataset

First, prepare an XML-annotated dataset. After labeling, the dataset directory looks like this:
(screenshot: labeled dataset directory)
ImageSets is empty at first. Next, run the script that splits the data into training, validation, and test sets (covered in the earlier post on dataset splitting; a minimal sketch follows). After splitting, the directory looks like this:
(screenshot: dataset directory after splitting)
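The actual split script is only linked above, not reproduced. A minimal sketch of the idea, assuming a VOC-style layout in which Annotations/ holds the XML files and ImageSets/Main/ receives the split lists; the paths and the 8:1:1 ratio are illustrative:

import os
import random

# Collect the sample names from the XML annotations and split them into
# train/val/test lists under ImageSets/Main.
root = 'datasets/pest'  # assumed dataset root
names = [f[:-4] for f in os.listdir(os.path.join(root, 'Annotations')) if f.endswith('.xml')]
random.seed(0)
random.shuffle(names)

n = len(names)
splits = {
    'train': names[:int(0.8 * n)],
    'val':   names[int(0.8 * n):int(0.9 * n)],
    'test':  names[int(0.9 * n):],
}
out_dir = os.path.join(root, 'ImageSets', 'Main')
os.makedirs(out_dir, exist_ok=True)
for split, items in splits.items():
    with open(os.path.join(out_dir, split + '.txt'), 'w') as f:
        f.write('\n'.join(items))

Each resulting *.txt file simply lists one sample name per line, which is what the later conversion steps read.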

Then convert the XML-format annotations to TXT format with the xml2txt conversion script (see the earlier post on format conversion; a sketch follows):
(screenshot: TXT-format labels)
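The xml2txt script itself is also only linked. A rough sketch of what such a conversion does, assuming one "class_id xmin ymin xmax ymax" line per object and the same class order as the config below (the TXT layout is an assumption, not the author's exact script):

import os
import xml.etree.ElementTree as ET

# Assumed class order; it must stay consistent everywhere downstream.
CLASSES = ['brown_spot', 'leaf_miner', 'paraleyrodes_pseudonaranjae_martin',
           'papilio_polytes', 'chlorococcum', 'canker', 'dark_mildew']

def xml_to_txt(xml_path, txt_path):
    """Write one 'class_id xmin ymin xmax ymax' line per object in a VOC XML file."""
    lines = []
    for obj in ET.parse(xml_path).getroot().iter('object'):
        cls_id = CLASSES.index(obj.find('name').text)
        box = obj.find('bndbox')
        coords = [box.find(tag).text for tag in ('xmin', 'ymin', 'xmax', 'ymax')]
        lines.append(' '.join([str(cls_id)] + coords))
    with open(txt_path, 'w') as f:
        f.write('\n'.join(lines))

# Example: convert every XML under Annotations/ to a TXT under labels/
# for name in os.listdir('Annotations'):
#     xml_to_txt(os.path.join('Annotations', name), os.path.join('labels', name[:-4] + '.txt'))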
Assemble the final dataset: images contains all of the images; labels contains all of the TXT label files; train.txt lists the file names of the training set.
(screenshot: final dataset layout)
Place the dataset under mm/datasets, with datasets at the same level as the mmdetection-master code:
(screenshot: directory structure)

2. Convert the TXT format to JSON

Here the dataset is reorganized into COCO format (JSON). Official docs: Train with customized datasets.

2.1 The COCO dataset format

Overall structure

{
    "images": [image],
    "annotations": [annotation],
    "categories": [category]
}

images

images is an array of image entries; below is an example image entry:

{
    "file_name": "文件名或文件路径",
    "height": 360,
    "width": 640,
    "id": 1  # image id
}

annotations

annotations is an array of annotation entries; below is an example annotation entry:

annotation{
    "id": int,  # annotation id
    "image_id": int,  # id of the image this annotation belongs to
    "category_id": int,  # category id
    # "segmentation": RLE or [polygon],  # polygon format when iscrowd=0, RLE format when iscrowd=1
    "area": float,  # area of the annotated region
    "bbox": [x, y, width, height],  # bbox annotation in xywh format
    # "iscrowd": 0 or 1,  # iscrowd=0: a single object; iscrowd=1: a group of objects (e.g. a crowd of people)
}

categories

categories is an array of category entries; below is an example category entry:

{
    "id": int,  # 类别id
    "name": str,  # 类别名
    # "supercategory": str,  # 类别父类  选填
}
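Putting the three sections together, a minimal but complete COCO annotation file for a single image can be built and written like this (all values are illustrative):

import json

# One image, one box, one category; numbers are illustrative.
coco = {
    "images": [
        {"file_name": "000001.jpg", "height": 360, "width": 640, "id": 1}
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 50, 100], "area": 5000.0, "iscrowd": 0}
    ],
    "categories": [
        {"id": 1, "name": "brown_spot"}
    ],
}

with open("example.json", "w") as f:
    json.dump(coco, f, indent=2)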

2.2 txt2json

Use the txt2json conversion script to convert the dataset from TXT to JSON (COCO) format.

The converted dataset:
(screenshot: dataset after conversion)
From this step onward, ImageSets is no longer needed; you can delete it or keep it.
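The linked txt2json script is not reproduced here. The following is a minimal sketch of such a conversion, assuming the TXT label format used above ("class_id xmin ymin xmax ymax" per line) and that Pillow is available for reading image sizes; function and file names are illustrative:

import json
import os
from PIL import Image  # only used to read image width/height

CLASSES = ['brown_spot', 'leaf_miner', 'paraleyrodes_pseudonaranjae_martin',
           'papilio_polytes', 'chlorococcum', 'canker', 'dark_mildew']

def txt_to_coco(split_file, img_dir, label_dir, out_json):
    """Build one COCO-format JSON from the TXT labels of a single split."""
    images, annotations = [], []
    ann_id = 1
    names = open(split_file).read().split()
    for img_id, name in enumerate(names, start=1):
        width, height = Image.open(os.path.join(img_dir, name + '.jpg')).size
        images.append({"file_name": name + '.jpg', "height": height,
                       "width": width, "id": img_id})
        for line in open(os.path.join(label_dir, name + '.txt')):
            cls_id, x1, y1, x2, y2 = map(float, line.split())
            w, h = x2 - x1, y2 - y1
            annotations.append({"id": ann_id, "image_id": img_id,
                                "category_id": int(cls_id) + 1,  # COCO category ids start at 1
                                "bbox": [x1, y1, w, h], "area": w * h, "iscrowd": 0})
            ann_id += 1
    categories = [{"id": i + 1, "name": c} for i, c in enumerate(CLASSES)]
    with open(out_json, 'w') as f:
        json.dump({"images": images, "annotations": annotations,
                   "categories": categories}, f)

# Example call for the training split (paths are illustrative):
# txt_to_coco('train.txt', 'images', 'labels', 'train.json')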

3. Modify the config file

Next, modify the relevant config file; Faster R-CNN is used as the example here:

faster_rcnn_r50_fpn_1x_pest.py

# Config
# 1 model config
model = dict(
    type='FasterRCNN',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[0.0, 0.0, 0.0, 0.0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    roi_head=dict(
        type='StandardRoIHead',
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=dict(
            type='Shared2FCBBoxHead',
            in_channels=256,
            fc_out_channels=1024,
            roi_feat_size=7,
            num_classes=7,  # modify 1: dataset class num
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0.0, 0.0, 0.0, 0.0],
                target_stds=[0.1, 0.1, 0.2, 0.2]),
            reg_class_agnostic=False,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                match_low_quality=True,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=-1,
            pos_weight=-1,
            debug=False),
        rpn_proposal=dict(
            nms_pre=2000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.5,
                min_pos_iou=0.5,
                match_low_quality=False,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True),
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        rpn=dict(
            nms_pre=1000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100)))

# 2 pipeline config
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1280, 640), keep_ratio=True),  # modify img max size and min size
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1280, 640),  # modify by yourself img max size and min size
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]

# 3 dataset config
classes = ('brown_spot', 'leaf_miner', 'paraleyrodes_pseudonaranjae_martin',
           'papilio_polytes', 'chlorococcum', 'canker', 'dark_mildew')
# modify 2: dataset classes
data = dict(
    samples_per_gpu=2,  # batch_size = samples_per_gpu*gpu_num
    workers_per_gpu=2,  # numworks = workers_per_gpu*gpu_num
    train=dict(
        type='CocoDataset',  # modify 3: dataset type
        classes=classes,     # modify 2: dataset classes
        data_root=r'I:\Miniconda\datasets\pest',  # modify 4: dataset root (raw string keeps Windows backslashes literal)
        ann_file='train.json',  # modify 5: dataset json annotation
        img_prefix='images',    # modify 6 dataset image prefix
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', with_bbox=True),
            dict(type='Resize', img_scale=(1280, 640), keep_ratio=True),  # modify 7: img max size and min size
            dict(type='RandomFlip', flip_ratio=0.5),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
        ]),
    val=dict(
        type='CocoDataset',  # same with train
        classes=classes,     # same with train
        data_root=r'I:\Miniconda\datasets\pest',  # same as train
        ann_file='val.json',   # same with train
        img_prefix='images',  # same with train
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1280, 640),     # same as train
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]),
    test=dict(
        type='CocoDataset',  # same with train
        classes=classes,     # same with train
        data_root=r'I:\Miniconda\datasets\pest',  # same as train
        ann_file='test.json',   # same with train
        img_prefix='images',  # same with train
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1280, 640),  # same as train
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]))
# 4 other config
# lr default 8 GPU = 0.02    1 GPU = 0.02/8
optimizer = dict(type='SGD', lr=0.02/8, momentum=0.9, weight_decay=0.0001)   # modify 8: lr change
optimizer_config = dict(grad_clip=None)
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=5)
checkpoint_config = dict(interval=1)  # save checkpoint per interval
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])  # print a log line every `interval` iterations
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = "checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth"  # 9: modify checkpoints root
resume_from = None
workflow = [('train', 1)]

# train: python tools/train.py tests/test_train/faster_rcnn_r50_fpn_1x_pest.py --work-dir work_dir


Note: the config file must not contain any Chinese characters.
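One quick way to catch such problems (and other config typos) is to load the config the same way tools/train.py does. A minimal sketch, assuming mmcv 1.x as used by mmdetection 2.x and the example config path from the comment at the end of the config above:

from mmcv import Config  # mmcv 1.x, as used by mmdetection 2.x

# Load the config exactly the way tools/train.py does and inspect the merged result.
cfg = Config.fromfile('tests/test_train/faster_rcnn_r50_fpn_1x_pest.py')
print(cfg.pretty_text)                               # fully resolved config
print(cfg.model.roi_head.bbox_head.num_classes)      # should be 7 for this dataset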

4. Start training

Directory layout:
(screenshot: project directory)

Single-GPU training command:

python tools/train.py <config_file> --gpus <gpu_num> --work-dir <work_dir>

example:

python tools/train.py mycode/train_example/pest/faster_rcnn_r50_fpn_1x_pest.py --work-dir mycode/train_example/work_dir

Multi-GPU (distributed) training command:

./tools/dist_train.sh <config_file> <gpu_num> --validate

example:

tools/dist_train.sh mycode/train_example/pest/faster_rcnn_r50_fpn_1x_pest.py 4 --validate

--validate: perform evaluation every k epochs (default k = 1) during training.
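In newer mmdetection 2.x releases, validation during training is enabled by default and its frequency is set in the config itself rather than on the command line. A minimal sketch of the relevant config line, assuming the standard COCO bbox metric:

# Evaluate on the val set every epoch with the COCO bbox metric (mmdetection 2.x config key).
evaluation = dict(interval=1, metric='bbox')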

Training starts successfully, as shown below:
(screenshot: training log)

Reference

CSDN: mmdetection 训练自己的数据 (training mmdetection on your own data).

GitHub docs > 2: Train with customized datasets.
