
How to Train mmdetection 3.x on Your Own Dataset

1. Prerequisites (mmdetection and its dependencies are already installed)

(1) Verify that mmdetection is installed correctly:

python demo/image_demo.py demo/demo.jpg rtmdet_tiny_8xb32-300e_coco.py --weights rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --device cpu

(2) The output of the demo run:

2. Configuring the training setup (this article uses ConvNeXt-V2 from mmdetection-3.x as the example)

(1) Read the _base_ configs referenced in the project config (D:\mmdetection-main\projects\ConvNeXt-V2\configs\mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco.py): mask-rcnn_r50 (the network model), coco (the dataset), schedule (learning rate and related settings), and default_runtime.

Open each of these .py files:

(2) Modify the following parameters:

Note: mmpretrain must be installed first.

Change 1: download the pretrained weights to the repo root, then point the config at the local file.

        Before:

checkpoint_file = 'https://download.openmmlab.com/mmclassification/v0/convnext-v2/convnext-v2-base_3rdparty-fcmae_in1k_20230104-8a798eaf.pth'

        After:

checkpoint_file = r'D:\mmdetection-main\convnext-v2-base_3rdparty-fcmae_in1k_20230104-8a798eaf.pth' 
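The checkpoint path is consumed by the backbone's init_cfg. In ConvNeXt-style configs it is wired in roughly as below; this is a sketch following the usual MMDetection pattern, so check your config's actual field names:

```python
checkpoint_file = r'D:\mmdetection-main\convnext-v2-base_3rdparty-fcmae_in1k_20230104-8a798eaf.pth'

model = dict(
    backbone=dict(
        init_cfg=dict(
            type='Pretrained',          # load pretrained weights instead of random init
            checkpoint=checkpoint_file,
            prefix='backbone.')))       # keep only backbone keys from the checkpoint
```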

Change 2: set image_size to your images' resolution, e.g.:

image_size = (512, 512)
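One sanity check worth running on the resolution you pick: Mask R-CNN's FPN downsamples by up to a factor of 32, so dimensions divisible by 32 avoid awkward padding. A small illustrative helper (not part of mmdetection):

```python
def fits_fpn_strides(image_size, max_stride=32):
    """Return True if both dimensions divide evenly by the largest FPN stride."""
    return all(side % max_stride == 0 for side in image_size)

print(fits_fpn_strides((512, 512)))  # True
print(fits_fpn_strides((500, 500)))  # False: 500 % 32 != 0
```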

Change 3: set batch_size and num_workers in train_dataloader according to your GPU and VRAM; this article uses 2 and 2.

train_dataloader = dict(
    batch_size=2,  # total_batch_size 32 = 8 GPUS x 4 images
    num_workers=2,
    dataset=dict(pipeline=train_pipeline))
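The comment about the total batch size matters: the upstream config's learning rate was tuned for 32 images per step (8 GPUs × 4 images), while one GPU at batch_size=2 gives an effective batch of 2. MMDetection 3.x can compensate via its auto_scale_lr mechanism, which follows the linear scaling rule. A hypothetical helper showing the arithmetic (the base_lr value is illustrative):

```python
def linearly_scaled_lr(base_lr, base_total_batch, num_gpus, batch_per_gpu):
    """Linear scaling rule: lr shrinks/grows in proportion to the effective batch size."""
    effective_batch = num_gpus * batch_per_gpu
    return base_lr * effective_batch / base_total_batch

# Upstream: lr tuned for 8 GPUs x 4 images = 32 per step.
# Here: 1 GPU x 2 images = 2 per step, so the lr should shrink 16x.
print(linearly_scaled_lr(0.0001, 32, 1, 2))  # 6.25e-06
```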

Change 4: open mask-rcnn_r50_fpn.py and change each num_classes=X to the number of classes in your own dataset.

First location:

Second location:
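In config terms, the two locations are the box head and the mask head inside the RoI head. Instead of editing the base file, you can also override both from your own config; a sketch using the two-class dataset from this article:

```python
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=2),    # first location: box classification head
        mask_head=dict(num_classes=2)))   # second location: mask prediction head
```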

Change 5: open coco_instance.py and change scale to your dataset's resolution.
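The scale setting lives in the Resize steps of the data pipelines in coco_instance.py; the train pipeline looks roughly like the sketch below (illustrative, matching the 512×512 example above):

```python
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize', scale=(512, 512), keep_ratio=True),  # <- your resolution here
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackDetInputs')
]
```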

Change 6: point the training data at your own dataset paths.

train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    batch_sampler=dict(type='AspectRatioBatchSampler'),
    dataset=dict(
        type=dataset_type,
        # data_root=data_root,
        # ann_file='annotations/instances_train2017.json',
        ann_file=r'E:\mmdetection-main\jyzData\coco\annotations\instances_train2017.json',  # !!!
        data_prefix=dict(img=r'E:\mmdetection-main\jyzData\coco\images\train2017/'),  # !!!
        filter_cfg=dict(filter_empty_gt=True, min_size=32),
        pipeline=train_pipeline,
        backend_args=backend_args))
val_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        # data_root=data_root,
        ann_file=r'E:\mmdetection-main\jyzData\coco\annotations\instances_val2017.json',  # !!!
        data_prefix=dict(img=r'E:\mmdetection-main\jyzData\coco\images\val2017/'),  # !!!
        test_mode=True,
        pipeline=test_pipeline,
        backend_args=backend_args))
test_dataloader = val_dataloader

val_evaluator = dict(
    type='CocoMetric',
    # ann_file=data_root + 'annotations/instances_val2017.json',
    ann_file=r'E:\mmdetection-main\jyzData\coco\annotations\instances_val2017.json',  # !!!
    metric=['bbox', 'segm'],
    format_only=False,
    backend_args=backend_args)
test_evaluator = val_evaluator

# inference on test dataset and
# format the output results for submission.
test_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        # data_root=data_root,
        ann_file=r'E:\mmdetection-main\jyzData\coco\annotations\instances_val2017.json',  # !!!
        data_prefix=dict(img=r'E:\mmdetection-main\jyzData\coco\images\val2017/'),  # !!!
        test_mode=True,
        pipeline=test_pipeline))
test_evaluator = dict(
    type='CocoMetric',
    metric=['bbox', 'segm'],
    format_only=True,
    ann_file=r'E:\mmdetection-main\jyzData\coco\annotations\instances_val2017.json',  # !!!
    outfile_prefix='./work_dirs/coco_instance/test')  # !!!

Change 7: open D:\mmdetection-main\mmdet\datasets\coco.py and change the class list.

METAINFO = {
    'classes':
    ('jyz', 'jyzB'),  # changed
    # palette is a list of color tuples, which is used for visualization.
    'palette':
    [(220, 20, 60), (119, 11, 32)]
}

Change 8: start training.

python E:\mmdetection-main\tools\train.py E:\mmdetection-main\projects\ConvNeXt-V2\configs\mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco.py

With everything in place, training begins:
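tools/train.py in mmdetection 3.x also accepts a few useful flags; the work-dir path below is just an example:

```shell
# --work-dir: where logs and checkpoints are written
# --auto-scale-lr: rescale the learning rate for your smaller effective batch size
# --amp: mixed-precision training, which reduces VRAM usage
python E:\mmdetection-main\tools\train.py E:\mmdetection-main\projects\ConvNeXt-V2\configs\mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco.py --work-dir ./work_dirs/convnext-v2_jyz --auto-scale-lr --amp
```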
