(1) Verify that mmdetection has been installed correctly:
python demo/image_demo.py demo/demo.jpg rtmdet_tiny_8xb32-300e_coco.py --weights rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --device cpu
(2) The result of the run:
(1) Read the _base_ files referenced in the config (D:\mmdetection-main\projects\ConvNeXt-V2\configs\mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco.py): mask-rcnn_r50 (the network model), coco (the dataset), schedule (learning rate and other training parameters), and default_runtime.
Open each of these .py files:
(2) Modify the following parameters:
Note: mmpretrain needs to be installed.
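If mmpretrain is not installed yet, a common way to install it is through mim; this command is not from the original article, but openmim and mmpretrain are standard OpenMMLab packages:
pip install -U openmim
mim install mmpretrain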
Change 1: download the pretrained weights (download them to the project root directory, then change the path).
Before:
checkpoint_file = 'https://download.openmmlab.com/mmclassification/v0/convnext-v2/convnext-v2-base_3rdparty-fcmae_in1k_20230104-8a798eaf.pth'
After:
checkpoint_file = r'D:\mmdetection-main\convnext-v2-base_3rdparty-fcmae_in1k_20230104-8a798eaf.pth'
Change 2: set image_size to your own image resolution, e.g.:
image_size = (512, 512)
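For reference, image_size is consumed further down in the config by the large-scale-jitter training pipeline. The sketch below only approximates what that pipeline typically looks like in MMDetection's LSJ configs; the exact fields in your version of the file may differ:
train_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    # resize randomly around image_size, then crop back to image_size
    dict(type='RandomResize', scale=image_size, ratio_range=(0.1, 2.0), keep_ratio=True),
    dict(type='RandomCrop', crop_type='absolute_range', crop_size=image_size,
         recompute_bbox=True, allow_negative_crop=True),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackDetInputs')
]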
Change 3: batch_size and num_workers in train_dataloader (set them according to your GPU and its memory; this article uses 2 and 2):
train_dataloader = dict(
    batch_size=2,  # total_batch_size 32 = 8 GPUS x 4 images
    num_workers=2,
    dataset=dict(pipeline=train_pipeline))
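Because the learning rate in the schedule was tuned for the original total batch size (32), it can drift out of tune with a much smaller batch. MMDetection's standard configs expose an auto_scale_lr option for this; a minimal sketch, assuming the option is present in your schedule file (check before relying on it):
# scale the learning rate automatically when the actual total batch size
# differs from the one the schedule was tuned for
auto_scale_lr = dict(enable=True, base_batch_size=32)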
Change 4: open the mask-rcnn_r50_fpn.py file and change each num_classes=X in it to the number of classes X in your own dataset.
First occurrence:
Second occurrence:
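For orientation, the two occurrences sit in the roi_head part of the model definition, one in bbox_head and one in mask_head. A trimmed sketch (all other fields omitted), using 2 classes to match the dataset configured in Change 7:
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=2),    # first occurrence: box classification head
        mask_head=dict(num_classes=2)))   # second occurrence: mask prediction head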
Change 5: open the coco_instance.py file and change scale to your own dataset's resolution.
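In the stock coco_instance.py, scale appears in the Resize step of both train_pipeline and test_pipeline. A hedged sketch of the relevant lines, with the default (1333, 800) replaced by the (512, 512) used above (the surrounding pipeline steps may differ slightly across versions):
train_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize', scale=(512, 512), keep_ratio=True),  # was (1333, 800)
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackDetInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='Resize', scale=(512, 512), keep_ratio=True),  # was (1333, 800)
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='PackDetInputs',
         meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor'))
]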
Change 6: replace the training data with your own dataset paths.
train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    batch_sampler=dict(type='AspectRatioBatchSampler'),
    dataset=dict(
        type=dataset_type,
        # data_root=data_root,
        # ann_file='annotations/instances_train2017.json',
        ann_file=r'E:\mmdetection-main\jyzData\coco\annotations\instances_train2017.json',  # !!!
        data_prefix=dict(img=r'E:\mmdetection-main\jyzData\coco\images\train2017/'),  # !!!
        filter_cfg=dict(filter_empty_gt=True, min_size=32),
        pipeline=train_pipeline,
        backend_args=backend_args))

val_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        # data_root=data_root,
        ann_file=r'E:\mmdetection-main\jyzData\coco\annotations\instances_val2017.json',  # !!!
        data_prefix=dict(img=r'E:\mmdetection-main\jyzData\coco\images\val2017/'),  # !!!
        test_mode=True,
        pipeline=test_pipeline,
        backend_args=backend_args))
test_dataloader = val_dataloader

val_evaluator = dict(
    type='CocoMetric',
    # ann_file=data_root + 'annotations/instances_val2017.json',
    ann_file=r'E:\mmdetection-main\jyzData\coco\annotations\instances_val2017.json',  # !!!
    metric=['bbox', 'segm'],
    format_only=False,
    backend_args=backend_args)
test_evaluator = val_evaluator

# inference on test dataset and
# format the output results for submission.
test_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        # data_root=data_root,
        ann_file=r'E:\mmdetection-main\jyzData\coco\annotations\instances_val2017.json',  # !!!
        data_prefix=dict(img=r'E:\mmdetection-main\jyzData\coco\images\val2017/'),  # !!!
        test_mode=True,
        pipeline=test_pipeline))
test_evaluator = dict(
    type='CocoMetric',
    metric=['bbox', 'segm'],
    format_only=True,
    ann_file=r'E:\mmdetection-main\jyzData\coco\annotations\instances_val2017.json',  # !!!
    outfile_prefix='./work_dirs/coco_instance/test')  # !!!
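Instead of repeating the absolute paths, the commented-out data_root pattern from the stock config can also be kept: define data_root once and give ann_file / data_prefix relative to it. A minimal sketch for the training loader, assuming the same dataset layout as above:
data_root = r'E:\mmdetection-main\jyzData\coco/'
train_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        ann_file='annotations/instances_train2017.json',
        data_prefix=dict(img='images/train2017/')))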
Change 7: open the D:\mmdetection-main\mmdet\datasets\coco.py file and change the classes.
METAINFO = {
    'classes':
    ('jyz', 'jyzB'),  # changed
    # palette is a list of color tuples, which is used for visualization.
    'palette':
    [(220, 20, 60), (119, 11, 32)]
}
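Editing coco.py works, but it modifies the library source. MMDetection also allows overriding the class metadata per dataset from your own config, which avoids touching coco.py; a hedged sketch with the same two classes:
metainfo = dict(
    classes=('jyz', 'jyzB'),
    palette=[(220, 20, 60), (119, 11, 32)])
train_dataloader = dict(dataset=dict(metainfo=metainfo))
val_dataloader = dict(dataset=dict(metainfo=metainfo))
test_dataloader = dict(dataset=dict(metainfo=metainfo))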
Change 8: start training.
python E:\mmdetection-main\tools\train.py E:\mmdetection-main\projects\ConvNeXt-V2\configs\mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco.py
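Optionally, tools/train.py also accepts flags such as --work-dir (where logs and checkpoints are written) and --resume (continue from the latest checkpoint); the work directory name below is just an example:
python E:\mmdetection-main\tools\train.py E:\mmdetection-main\projects\ConvNeXt-V2\configs\mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco.py --work-dir ./work_dirs/convnext-v2_jyz --resume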
Everything is set up; training starts: