[YOLO Improvements] Swapping the Backbone: StarNet, Microsoft's New CVPR 2024 Lightweight Backbone (Based on MMYOLO)

StarNet

Paper: Rewrite the Stars (arXiv:2403.19967)

GitHub repo: https://github.com/ma-xu/Rewrite-the-Stars ([CVPR 2024] Rewrite the Stars)

The CVPR 2024 paper Rewrite the Stars shows that the star operation (element-wise multiplication) can map inputs into a high-dimensional, non-linear feature space without widening the network. Building on this insight, the authors propose StarNet, which delivers impressive performance and low latency from a compact structure with a modest compute budget.
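
As a minimal sketch of what this looks like in code (my own illustration; using 1x1 convolutions as the two linear branches, matching the Block definition later in this post), the star operation is just an element-wise product of two linear projections:

import torch
import torch.nn as nn

dim, mlp_ratio = 32, 3
f1 = nn.Conv2d(dim, mlp_ratio * dim, 1)  # linear branch 1 (1x1 conv)
f2 = nn.Conv2d(dim, mlp_ratio * dim, 1)  # linear branch 2 (1x1 conv)
act = nn.ReLU6()

x = torch.randn(1, dim, 56, 56)
out = act(f1(x)) * f2(x)  # the "star": element-wise multiplication of the branches
print(out.shape)          # torch.Size([1, 96, 56, 56])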

Advantages

  1. High-dimensional and non-linear feature transformation

    • StarNet maps features into a high-dimensional, non-linear space via the star operation, without adding computational complexity. Much like classic kernel tricks, the star operation obtains high-dimensional features implicitly from low-dimensional inputs (a small numeric check of this claim follows the list below).
    • For YOLO-family networks, this means richer and more expressive feature representations at the same computational cost, which matters for capturing fine-grained detail in object detection.
  2. Efficient network design

    • The star operation gives StarNet efficient feature representations without intricate architectures or extra computational overhead. Its distinctive property is that computation runs in a low-dimensional space while implicitly accounting for features of far higher dimensionality.
    • This makes StarNet a good backbone for YOLO-family networks, pairing cheap computation with strong feature representations and helping detection performance in resource-constrained environments.
  3. Multi-layer implicit feature expansion

    • Stacking star operations across layers recursively multiplies the implicit feature dimensionality, approaching an effectively unbounded dimension. For networks of sufficient width and depth, this markedly strengthens representational power.
    • For YOLO-family networks, an appropriate choice of depth and width can therefore noticeably raise the quality of extracted features and, in turn, detection accuracy.
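
To make the "implicit high-dimensional features" claim concrete, here is a small numeric check (my own illustration, not code from the paper): for a single star operation on vectors, (w1ᵀx)(w2ᵀx) equals a fixed linear function over all d² pairwise products x_i·x_j, a feature space that is never materialized:

import torch

torch.manual_seed(0)
d = 8
w1, w2, x = torch.randn(d), torch.randn(d), torch.randn(d)

# Star operation: product of two linear projections of the same input.
star = (w1 @ x) * (w2 @ x)

# Equivalent view: a linear map over the d*d pairwise products x_i * x_j.
pairwise = torch.outer(x, x).flatten()  # implicit high-dimensional features
coeffs = torch.outer(w1, w2).flatten()  # the matching linear weights
expanded = coeffs @ pairwise

print(torch.allclose(star, expanded))  # True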

Problems Addressed

  1. Balancing computational complexity and performance

    • The star operation lets StarNet reach a high-dimensional feature mapping while keeping computational complexity low, easing the trade-off between complexity and performance that constrains traditional efficient-network design.
    • YOLO-family networks must strike a balance between real-time speed and detection accuracy; StarNet's efficiency profile fits that requirement well.
  2. Richness of feature representation

    • Traditional convolutional networks are limited in the high-dimensional non-linear transformations they can express, whereas the star operation gives StarNet richer feature representations.
    • In detection tasks, particularly for small objects and complex scenes, richer features significantly improve results, letting YOLO-family networks perform better in those settings.
  3. Simplified network design

    • The star operation offers a simpler route to efficient feature representation, with no need for elaborate feature fusion or multi-branch designs.
    • For YOLO-family networks, this makes an efficient backbone easier to design, implement, and debug.

Using StarNet as the YOLOv5 Backbone in MMYOLO

1. Download imagenet/starnet.py from the repository linked above.

2. Modify the forward function in starnet.py and add an out_indices parameter so that it can output the feature maps of different stages.

3. Register the StarNet class and update the backbone package's __init__.py accordingly.

4. Modify the config file, mainly adjusting the input/output channel numbers of the YOLOv5 neck and head.

Modified starnet.py

  1. """
  2. Implementation of Prof-of-Concept Network: StarNet.
  3. We make StarNet as simple as possible [to show the key contribution of element-wise multiplication]:
  4. - like NO layer-scale in network design,
  5. - and NO EMA during training,
  6. - which would improve the performance further.
  7. Created by: Xu Ma (Email: ma.xu1@northeastern.edu)
  8. Modified Date: Mar/29/2024
  9. """
  10. import torch
  11. import torch.nn as nn
  12. from timm.models.layers import DropPath, trunc_normal_
  13. from typing import List, Sequence, Union
  14. # from timm.models.registry import register_model
  15. from mmyolo.registry import MODELS
  16. model_urls = {
  17. "starnet_s1": "https://github.com/ma-xu/Rewrite-the-Stars/releases/download/checkpoints_v1/starnet_s1.pth.tar",
  18. "starnet_s2": "https://github.com/ma-xu/Rewrite-the-Stars/releases/download/checkpoints_v1/starnet_s2.pth.tar",
  19. "starnet_s3": "https://github.com/ma-xu/Rewrite-the-Stars/releases/download/checkpoints_v1/starnet_s3.pth.tar",
  20. "starnet_s4": "https://github.com/ma-xu/Rewrite-the-Stars/releases/download/checkpoints_v1/starnet_s4.pth.tar",
  21. }
  22. class ConvBN(torch.nn.Sequential):
  23. def __init__(self, in_planes, out_planes, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, with_bn=True):
  24. super().__init__()
  25. self.add_module('conv', torch.nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, dilation, groups))
  26. if with_bn:
  27. self.add_module('bn', torch.nn.BatchNorm2d(out_planes))
  28. torch.nn.init.constant_(self.bn.weight, 1)
  29. torch.nn.init.constant_(self.bn.bias, 0)
  30. class Block(nn.Module):
  31. def __init__(self, dim, mlp_ratio=3, drop_path=0.):
  32. super().__init__()
  33. self.dwconv = ConvBN(dim, dim, 7, 1, (7 - 1) // 2, groups=dim, with_bn=True)
  34. self.f1 = ConvBN(dim, mlp_ratio * dim, 1, with_bn=False)
  35. self.f2 = ConvBN(dim, mlp_ratio * dim, 1, with_bn=False)
  36. self.g = ConvBN(mlp_ratio * dim, dim, 1, with_bn=True)
  37. self.dwconv2 = ConvBN(dim, dim, 7, 1, (7 - 1) // 2, groups=dim, with_bn=False)
  38. self.act = nn.ReLU6()
  39. self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
  40. def forward(self, x):
  41. input = x
  42. x = self.dwconv(x)
  43. x1, x2 = self.f1(x), self.f2(x)
  44. x = self.act(x1) * x2
  45. x = self.dwconv2(self.g(x))
  46. x = input + self.drop_path(x)
  47. return x
  48. @MODELS.register_module()
  49. class StarNet(nn.Module):
  50. def __init__(self, base_dim=32, out_indices: Sequence[int] = (0, 1, 2), depths=[3, 3, 12, 5], mlp_ratio=4,
  51. drop_path_rate=0.0, num_classes=1000, **kwargs):
  52. super().__init__()
  53. self.num_classes = num_classes
  54. self.in_channel = 32
  55. self.out_indices = out_indices
  56. self.depths = depths
  57. # stem layer
  58. self.stem = nn.Sequential(ConvBN(3, self.in_channel, kernel_size=3, stride=2, padding=1), nn.ReLU6())
  59. dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth
  60. # build stages
  61. self.stages = nn.ModuleList()
  62. cur = 0
  63. for i_layer in range(len(depths)):
  64. embed_dim = base_dim * 2 ** i_layer
  65. down_sampler = ConvBN(self.in_channel, embed_dim, 3, 2, 1)
  66. self.in_channel = embed_dim
  67. blocks = [Block(self.in_channel, mlp_ratio, dpr[cur + i]) for i in range(depths[i_layer])]
  68. cur += depths[i_layer]
  69. self.stages.append(nn.Sequential(down_sampler, *blocks))
  70. # head
  71. # self.norm = nn.BatchNorm2d(self.in_channel)
  72. # self.avgpool = nn.AdaptiveAvgPool2d(1)
  73. # self.head = nn.Linear(self.in_channel, num_classes)
  74. # self.apply(self._init_weights)
  75. def _init_weights(self, m):
  76. if isinstance(m, nn.Linear or nn.Conv2d):
  77. trunc_normal_(m.weight, std=.02)
  78. if isinstance(m, nn.Linear) and m.bias is not None:
  79. nn.init.constant_(m.bias, 0)
  80. elif isinstance(m, nn.LayerNorm or nn.BatchNorm2d):
  81. nn.init.constant_(m.bias, 0)
  82. nn.init.constant_(m.weight, 1.0)
  83. def forward(self, x):
  84. x = self.stem(x)
  85. ##记录stage的输出
  86. outs = []
  87. for i in range(len(self.depths)):
  88. x = self.stages[i](x)
  89. if i in self.out_indices:
  90. outs.append(x)
  91. return tuple(outs)
  92. @MODELS.register_module()
  93. def starnet_s1(pretrained=False, **kwargs):
  94. model = StarNet(24, (0, 1, 2), [2, 2, 8, 3], **kwargs)
  95. if pretrained:
  96. url = model_urls['starnet_s1']
  97. checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu")
  98. model.load_state_dict(checkpoint["state_dict"])
  99. return model
  100. @MODELS.register_module()
  101. def starnet_s2(pretrained=False, **kwargs):
  102. model = StarNet(32, (0, 1, 2), [1, 2, 6, 2], **kwargs)
  103. if pretrained:
  104. url = model_urls['starnet_s2']
  105. checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu")
  106. model.load_state_dict(checkpoint["state_dict"])
  107. return model
  108. @MODELS.register_module()
  109. def starnet_s3(pretrained=False, **kwargs):
  110. model = StarNet(32, (0, 1, 2), [2, 2, 8, 4], **kwargs)
  111. if pretrained:
  112. url = model_urls['starnet_s3']
  113. checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu")
  114. model.load_state_dict(checkpoint["state_dict"])
  115. return model
  116. @MODELS.register_module()
  117. def starnet_s4(pretrained=False, **kwargs):
  118. model = StarNet(32, (0, 1, 2), [3, 3, 12, 5], **kwargs)
  119. if pretrained:
  120. url = model_urls['starnet_s4']
  121. checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu")
  122. model.load_state_dict(checkpoint["state_dict"])
  123. return model
  124. # very small networks #
  125. @MODELS.register_module()
  126. def starnet_s050(pretrained=False, **kwargs):
  127. return StarNet(16, (0, 1, 2), [1, 1, 3, 1], 3, **kwargs)
  128. @MODELS.register_module()
  129. def starnet_s100(pretrained=False, **kwargs):
  130. return StarNet(20, (0, 1, 2), [1, 2, 4, 1], 4, **kwargs)
  131. @MODELS.register_module()
  132. def starnet_s150(pretrained=False, **kwargs):
  133. return StarNet(24, (0, 1, 2), [1, 2, 4, 2], 3, **kwargs)
  134. if __name__ == '__main__':
  135. model = StarNet()
  136. input_tensor = torch.randn(1, 3, 224, 224)
  137. outputs = model(input_tensor)
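
As a quick sanity check (my own snippet, assuming the file above is importable), the s1 setting with out_indices=(1, 2, 3) yields exactly the three feature maps a stride-8/16/32 YOLOv5 neck expects:

import torch
from starnet import StarNet  # the modified file above, assumed on the import path

model = StarNet(base_dim=24, out_indices=(1, 2, 3), depths=[2, 2, 8, 3], mlp_ratio=4)
feats = model(torch.randn(1, 3, 640, 640))
for f in feats:
    print(f.shape)
# torch.Size([1, 48, 80, 80]), torch.Size([1, 96, 40, 40]), torch.Size([1, 192, 20, 20])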

Modified __init__.py

# Copyright (c) OpenMMLab. All rights reserved.
from .base_backbone import BaseBackbone
from .csp_darknet import YOLOv5CSPDarknet, YOLOv8CSPDarknet, YOLOXCSPDarknet
from .csp_resnet import PPYOLOECSPResNet
from .cspnext import CSPNeXt
from .efficient_rep import YOLOv6CSPBep, YOLOv6EfficientRep
from .yolov7_backbone import YOLOv7Backbone
from .starnet import StarNet

__all__ = [
    'YOLOv5CSPDarknet', 'BaseBackbone', 'YOLOv6EfficientRep', 'YOLOv6CSPBep',
    'YOLOXCSPDarknet', 'CSPNeXt', 'YOLOv7Backbone', 'PPYOLOECSPResNet',
    'YOLOv8CSPDarknet', 'StarNet'
]
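
To confirm the registration works, one quick check (my own snippet) is to build the backbone directly from the MMYOLO registry:

import torch
from mmyolo.registry import MODELS
from mmyolo.utils import register_all_modules

register_all_modules()  # imports mmyolo.models, which registers StarNet

backbone = MODELS.build(dict(
    type='StarNet', base_dim=24, out_indices=(1, 2, 3),
    depths=[2, 2, 8, 3], mlp_ratio=4))
print([f.shape for f in backbone(torch.randn(1, 3, 640, 640))])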

Modified config file (using yolov5_s-v61_syncbn_8xb16-300e_coco.py as an example)

_base_ = ['../_base_/default_runtime.py', '../_base_/det_p5_tta.py']

# ========================Frequently modified parameters======================
# -----data related-----
data_root = 'data/coco/'  # Root path of data
# Path of train annotation file
train_ann_file = 'annotations/instances_train2017.json'
train_data_prefix = 'train2017/'  # Prefix of train image path
# Path of val annotation file
val_ann_file = 'annotations/instances_val2017.json'
val_data_prefix = 'val2017/'  # Prefix of val image path

num_classes = 80  # Number of classes for classification
# Batch size of a single GPU during training
train_batch_size_per_gpu = 16
# Worker to pre-fetch data for each single GPU during training
train_num_workers = 8
# persistent_workers must be False if num_workers is 0
persistent_workers = True

# -----model related-----
# Basic size of multi-scale prior box
anchors = [
    [(10, 13), (16, 30), (33, 23)],  # P3/8
    [(30, 61), (62, 45), (59, 119)],  # P4/16
    [(116, 90), (156, 198), (373, 326)]  # P5/32
]

# -----train val related-----
# Base learning rate for optim_wrapper. Corresponding to 8xb16=128 bs
base_lr = 0.01
max_epochs = 300  # Maximum training epochs

model_test_cfg = dict(
    # The config of multi-label for multi-class prediction.
    multi_label=True,
    # The number of boxes before NMS
    nms_pre=30000,
    score_thr=0.001,  # Threshold to filter out boxes.
    nms=dict(type='nms', iou_threshold=0.65),  # NMS type and threshold
    max_per_img=300)  # Max number of detections of each image

# ========================Possible modified parameters========================
# -----data related-----
img_scale = (640, 640)  # width, height
# Dataset type, this will be used to define the dataset
dataset_type = 'YOLOv5CocoDataset'
# Batch size of a single GPU during validation
val_batch_size_per_gpu = 1
# Worker to pre-fetch data for each single GPU during validation
val_num_workers = 2

# Config of batch shapes. Only on val.
# It means not used if batch_shapes_cfg is None.
batch_shapes_cfg = dict(
    type='BatchShapePolicy',
    batch_size=val_batch_size_per_gpu,
    img_size=img_scale[0],
    # The image scale of padding should be divided by pad_size_divisor
    size_divisor=32,
    # Additional paddings for pixel scale
    extra_pad_ratio=0.5)

# -----model related-----
# The scaling factor that controls the depth of the network structure
deepen_factor = 0.33
# The scaling factor that controls the width of the network structure
widen_factor = 0.5
# Strides of multi-scale prior box
strides = [8, 16, 32]
num_det_layers = 3  # The number of model output scales
norm_cfg = dict(type='BN', momentum=0.03, eps=0.001)  # Normalization config

# -----train val related-----
affine_scale = 0.5  # YOLOv5RandomAffine scaling ratio
loss_cls_weight = 0.5
loss_bbox_weight = 0.05
loss_obj_weight = 1.0
prior_match_thr = 4.  # Priori box matching threshold
# The obj loss weights of the three output layers
obj_level_weights = [4., 1., 0.4]
lr_factor = 0.01  # Learning rate scaling factor
weight_decay = 0.0005
# Save model checkpoint and validation intervals
save_checkpoint_intervals = 10
# The maximum checkpoints to keep.
max_keep_ckpts = 3
# Single-scale training is recommended to
# be turned on, which can speed up training.
env_cfg = dict(cudnn_benchmark=True)

# StarNet variants: base_dim, starnet_channel (stage 1-3 outputs), depths, mlp_ratio
#   s1:   24, [48, 96, 192],  [2, 2, 8, 3],  4
#   s2:   32, [64, 128, 256], [1, 2, 6, 2],  4
#   s3:   32, [64, 128, 256], [2, 2, 8, 4],  4
#   s4:   32, [64, 128, 256], [3, 3, 12, 5], 4
#   s050: 16, [32, 64, 128],  [1, 1, 3, 1],  3
#   s100: 20, [40, 80, 160],  [1, 2, 4, 1],  4
#   s150: 24, [48, 96, 192],  [1, 2, 4, 2],  3
starnet_channel = [48, 96, 192]  # actual s1 backbone output channels (stages 1-3)
depths = [2, 2, 8, 3]  # s1 depths

# ===============================Unmodified in most cases====================
model = dict(
    type='YOLODetector',
    data_preprocessor=dict(
        type='mmdet.DetDataPreprocessor',
        mean=[0., 0., 0.],
        std=[255., 255., 255.],
        bgr_to_rgb=True),
    backbone=dict(
        # StarNet-s1
        type='StarNet',
        base_dim=24,
        # stages 1-3 give channels [48, 96, 192] at strides 8/16/32
        out_indices=(1, 2, 3),
        depths=depths,
        mlp_ratio=4,
        num_classes=num_classes,
        # deepen_factor=deepen_factor,
        # widen_factor=widen_factor,
        # norm_cfg=norm_cfg,
        # act_cfg=dict(type='SiLU', inplace=True)
    ),
    neck=dict(
        type='YOLOv5PAFPN',
        deepen_factor=deepen_factor,
        widen_factor=widen_factor,
        # The neck and head rescale their channel settings by widen_factor
        # internally, while StarNet ignores widen_factor, so pass the real
        # backbone channels divided by widen_factor here.
        in_channels=[int(c / widen_factor) for c in starnet_channel],
        out_channels=[int(c / widen_factor) for c in starnet_channel],
        num_csp_blocks=3,
        norm_cfg=norm_cfg,
        act_cfg=dict(type='SiLU', inplace=True)),
    bbox_head=dict(
        type='YOLOv5Head',
        head_module=dict(
            type='YOLOv5HeadModule',
            num_classes=num_classes,
            in_channels=[int(c / widen_factor) for c in starnet_channel],
            widen_factor=widen_factor,
            featmap_strides=strides,
            num_base_priors=3),
        prior_generator=dict(
            type='mmdet.YOLOAnchorGenerator',
            base_sizes=anchors,
            strides=strides),
        # scaled based on number of detection layers
        loss_cls=dict(
            type='mmdet.CrossEntropyLoss',
            use_sigmoid=True,
            reduction='mean',
            loss_weight=loss_cls_weight *
            (num_classes / 80 * 3 / num_det_layers)),
        # modify here to swap in a different IoU loss
        loss_bbox=dict(
            type='IoULoss',
            focal=True,
            iou_mode='ciou',
            bbox_format='xywh',
            eps=1e-7,
            reduction='mean',
            loss_weight=loss_bbox_weight * (3 / num_det_layers),
            return_iou=True),
        loss_obj=dict(
            type='mmdet.CrossEntropyLoss',
            use_sigmoid=True,
            reduction='mean',
            loss_weight=loss_obj_weight *
            ((img_scale[0] / 640) ** 2 * 3 / num_det_layers)),
        prior_match_thr=prior_match_thr,
        obj_level_weights=obj_level_weights),
    test_cfg=model_test_cfg)

albu_train_transforms = [
    dict(type='Blur', p=0.01),
    dict(type='MedianBlur', p=0.01),
    dict(type='ToGray', p=0.01),
    dict(type='CLAHE', p=0.01)
]

pre_transform = [
    dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args),
    dict(type='LoadAnnotations', with_bbox=True)
]

train_pipeline = [
    *pre_transform,
    dict(
        type='Mosaic',
        img_scale=img_scale,
        pad_val=114.0,
        pre_transform=pre_transform),
    dict(
        type='YOLOv5RandomAffine',
        max_rotate_degree=0.0,
        max_shear_degree=0.0,
        scaling_ratio_range=(1 - affine_scale, 1 + affine_scale),
        # img_scale is (width, height)
        border=(-img_scale[0] // 2, -img_scale[1] // 2),
        border_val=(114, 114, 114)),
    dict(
        type='mmdet.Albu',
        transforms=albu_train_transforms,
        bbox_params=dict(
            type='BboxParams',
            format='pascal_voc',
            label_fields=['gt_bboxes_labels', 'gt_ignore_flags']),
        keymap={
            'img': 'image',
            'gt_bboxes': 'bboxes'
        }),
    dict(type='YOLOv5HSVRandomAug'),
    dict(type='mmdet.RandomFlip', prob=0.5),
    dict(
        type='mmdet.PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip',
                   'flip_direction'))
]

train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_workers,
    persistent_workers=persistent_workers,
    pin_memory=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file=train_ann_file,
        data_prefix=dict(img=train_data_prefix),
        filter_cfg=dict(filter_empty_gt=False, min_size=32),
        pipeline=train_pipeline))

test_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args),
    dict(type='YOLOv5KeepRatioResize', scale=img_scale),
    dict(
        type='LetterResize',
        scale=img_scale,
        allow_scale_up=False,
        pad_val=dict(img=114)),
    dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'),
    dict(
        type='mmdet.PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor', 'pad_param'))
]

val_dataloader = dict(
    batch_size=val_batch_size_per_gpu,
    num_workers=val_num_workers,
    persistent_workers=persistent_workers,
    pin_memory=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        test_mode=True,
        data_prefix=dict(img=val_data_prefix),
        ann_file=val_ann_file,
        pipeline=test_pipeline,
        batch_shapes_cfg=batch_shapes_cfg))

test_dataloader = val_dataloader

param_scheduler = None
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(
        type='SGD',
        lr=base_lr,
        momentum=0.937,
        weight_decay=weight_decay,
        nesterov=True,
        batch_size_per_gpu=train_batch_size_per_gpu),
    constructor='YOLOv5OptimizerConstructor')

default_hooks = dict(
    param_scheduler=dict(
        type='YOLOv5ParamSchedulerHook',
        scheduler_type='linear',
        lr_factor=lr_factor,
        max_epochs=max_epochs),
    checkpoint=dict(
        type='CheckpointHook',
        interval=save_checkpoint_intervals,
        save_best='auto',
        max_keep_ckpts=max_keep_ckpts))

custom_hooks = [
    dict(
        type='EMAHook',
        ema_type='ExpMomentumEMA',
        momentum=0.0001,
        update_buffers=True,
        strict_load=False,
        priority=49)
]

val_evaluator = dict(
    type='mmdet.CocoMetric',
    proposal_nums=(100, 1, 10),
    ann_file=data_root + val_ann_file,
    metric='bbox')
test_evaluator = val_evaluator

train_cfg = dict(
    type='EpochBasedTrainLoop',
    max_epochs=max_epochs,
    val_interval=save_checkpoint_intervals)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
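
With the three files in place, training proceeds like any other MMYOLO experiment. A minimal launcher (my own sketch; the config path and work_dir are hypothetical and should point at wherever you saved the file above):

from mmengine.config import Config
from mmengine.runner import Runner
from mmyolo.utils import register_all_modules

register_all_modules()
cfg = Config.fromfile('configs/yolov5/yolov5_s_starnet-300e_coco.py')  # hypothetical path
cfg.work_dir = './work_dirs/yolov5_s_starnet'
runner = Runner.from_cfg(cfg)
runner.train()

Equivalently, the standard MMYOLO entry point python tools/train.py <config> works as usual.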
