
PySOT Code Analysis: SiamRPN++ — Training

I. Overall flow

def main():
    rank, world_size = dist_init()  # rank: process ID; world_size: number of processes / tasks / GPUs
    logger.info("init done")

    # load cfg
    cfg.merge_from_file(args.cfg)

    # the rank-0 process sets up logging
    if rank == 0:
        if not os.path.exists(cfg.TRAIN.LOG_DIR):
            os.makedirs(cfg.TRAIN.LOG_DIR)
        init_log('global', logging.INFO)
        if cfg.TRAIN.LOG_DIR:
            add_file_handler('global',
                             os.path.join(cfg.TRAIN.LOG_DIR, 'logs.txt'),
                             logging.INFO)
        logger.info("Version Information: \n{}\n".format(commit()))
        logger.info("config \n{}".format(json.dumps(cfg, indent=4)))

    # Build the model. The constructor calls get_backbone / get_neck /
    # get_rpn_head / get_mask_head (optional).
    # ModelBuilder implements the training-time forward(data) pass: data holds
    # the template patch and search patch, the classification labels label_cls,
    # the box-regression labels label_loc and the regression weights
    # label_loc_weight; the returned outputs dict contains total_loss,
    # cls_loss and loc_loss.
    # ModelBuilder also implements the inference-time methods template(z)
    # (backbone + neck for the template branch) and track(x) (backbone + neck
    # for the search branch, whose result is fed into rpn_head together with
    # the stored template feature), returning cls / loc / mask (optional).
    model = ModelBuilder().cuda().train()

    # load the pretrained backbone weights into the freshly built model
    if cfg.BACKBONE.PRETRAINED:
        cur_path = os.path.dirname(os.path.realpath(__file__))
        backbone_path = os.path.join(cur_path, '../', cfg.BACKBONE.PRETRAINED)
        load_pretrain(model.backbone, backbone_path)

    # create tensorboard writer
    if rank == 0 and cfg.TRAIN.LOG_DIR:
        tb_writer = SummaryWriter(cfg.TRAIN.LOG_DIR)
    else:
        tb_writer = None

    # create dataset loader
    train_loader = build_data_loader()

    # build optimizer and lr_scheduler
    optimizer, lr_scheduler = build_opt_lr(model,
                                           cfg.TRAIN.START_EPOCH)

    # resume training
    if cfg.TRAIN.RESUME:
        logger.info("resume from {}".format(cfg.TRAIN.RESUME))
        assert os.path.isfile(cfg.TRAIN.RESUME), \
            '{} is not a valid file.'.format(cfg.TRAIN.RESUME)
        model, optimizer, cfg.TRAIN.START_EPOCH = \
            restore_from(model, optimizer, cfg.TRAIN.RESUME)
    # load pretrain
    elif cfg.TRAIN.PRETRAINED:
        load_pretrain(model, cfg.TRAIN.PRETRAINED)

    # the authors' own wrapper for distributed training; a copy of the model
    # is placed on every GPU
    dist_model = DistModule(model)

    logger.info(lr_scheduler)
    logger.info("model prepare done")

    # start training
    train(train_loader, dist_model, optimizer, lr_scheduler, tb_writer)
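train() itself is not reproduced in this post. Stripped of the distributed and logging bookkeeping, each iteration boils down to a standard optimization step over the outputs dict returned by ModelBuilder; the following is only a simplified sketch under that assumption, not the verbatim pysot train():

def train_one_iter(data, dist_model, optimizer):
    # simplified: the real train() also averages gradients across GPUs,
    # clips them, steps the lr_scheduler and writes to tensorboard
    outputs = dist_model(data)           # {'total_loss', 'cls_loss', 'loc_loss'}
    loss = outputs['total_loss']
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return outputs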

II. ModelBuilder

1. Backbone

The authors found the original ResNet's stride too large, so in conv4 and conv5 they change stride=2 to stride=1. To keep the original receptive field they use dilated (atrous) convolution: dilation=1 is the default convolution with no holes, while dilation=2 leaves a gap of one cell between sampled cells.
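For intuition, replacing a stride-2 convolution with a stride-1, dilation-2 convolution keeps the feature-map resolution while each output still covers a 5×5 input window; a minimal sketch (illustrative only, not pysot code):

import torch
import torch.nn as nn

x = torch.randn(1, 64, 31, 31)

# original ResNet: stride=2 halves the spatial resolution
conv_s2 = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1, bias=False)
# SiamRPN++ backbone: stride=1 with dilation=2 keeps the resolution,
# and the 3x3 kernel now spans a 5x5 neighbourhood
conv_d2 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=2, dilation=2, bias=False)

print(conv_s2(x).shape)  # torch.Size([1, 64, 16, 16])
print(conv_d2(x).shape)  # torch.Size([1, 64, 31, 31])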

The layers of the modified ResNet-50 are listed below.

Downsampling is done in the first Bottleneck of each layer: the first call to the block changes the channel count, so the identity x can no longer be added directly to the convolved x; the downsample branch projects x so that the dimensions match.

layer1: dilation=1, self.inplanes = 64*4 = 256, output channels: 64 => 256

layer2: dilation=1, self.inplanes = 128*4 = 512, output channels: 256 => 512, self.feature_size = 128 * block.expansion = 128*4 = 512

layer3: dilation=2 to enlarge the receptive field, self.inplanes = 256*4 = 1024, output channels: 512 => 1024, self.feature_size = (256+128) * block.expansion = (256+128)*4 = 1536

layer4: dilation=4 to enlarge the receptive field, self.inplanes = 512*4 = 2048, output channels: 1024 => 2048, self.feature_size = 512 * block.expansion = 512*4 = 2048

ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer2): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer3): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (4): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (5): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer4): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(1024, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
        (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
)
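A quick back-of-the-envelope check of the spatial sizes implied by this printout (conv1 and layer2's stride-2 conv have no padding, maxpool has padding=1, layer3/4 keep stride=1); this is plain size arithmetic, not pysot code:

def conv_out(size, k, s, p=0, d=1):
    # standard output-size formula for a conv / pooling layer
    return (size + 2 * p - d * (k - 1) - 1) // s + 1

for name, size in [('template 127', 127), ('search 255', 255)]:
    size = conv_out(size, k=7, s=2)       # conv1
    size = conv_out(size, k=3, s=2, p=1)  # maxpool
    size = conv_out(size, k=3, s=2)       # layer2, first Bottleneck conv2
    print(name, '->', size)               # template -> 15, search -> 31

# layer3 and layer4 use stride=1 with dilation, so these sizes are preserved,
# which is why the neck below sees a 15x15 template feature and a 31x31 search feature.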

2. Neck

The neck bridges the backbone and the head. The outputs of ResNet layer2/3/4 (conv3/4/5 in the paper's figure) are its inputs; the config sets the output channels to 256 and the input channels to the corresponding layer outputs, i.e. 512 => 256, 1024 => 256, 2048 => 256. Concretely this is a 1×1 convolution (called downsample in the code). After layer2/3/4 the template frame's feature map is 15×15; to reduce computation, a 7×7 region around the center is cropped out and used as the template feature.

import torch.nn as nn

class AdjustLayer(nn.Module):
    def __init__(self, in_channels, out_channels, center_size=7):
        super(AdjustLayer, self).__init__()
        self.downsample = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        self.center_size = center_size

    def forward(self, x):
        x = self.downsample(x)
        if x.size(3) < 20:  # the template feature is 15x15 after layer2/3/4
            l = (x.size(3) - self.center_size) // 2
            r = l + self.center_size
            x = x[:, :, l:r, l:r]
        return x
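A quick usage sketch with dummy tensors (channel counts taken from the figures above; assumes the AdjustLayer class from this post and torch are available), showing that the center crop only triggers for the 15×15 template feature:

import torch

adjust = AdjustLayer(in_channels=1024, out_channels=256)   # e.g. the layer3 branch
zf = adjust(torch.randn(1, 1024, 15, 15))  # template feature -> center-cropped
xf = adjust(torch.randn(1, 1024, 31, 31))  # search feature   -> left untouched
print(zf.shape)  # torch.Size([1, 256, 7, 7])
print(xf.shape)  # torch.Size([1, 256, 31, 31])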

3. RPN head

Look at the config first: MultiRPN is used, with 5 anchors per location; the three input channel counts are all 256 (the three neck outputs), and weighted is true.

RPN:
    TYPE: 'MultiRPN'
    KWARGS:
        anchor_num: 5
        in_channels: [256, 256, 256]
        weighted: true

For each of the three neck outputs a DepthwiseRPN is run. Learnable weight parameters self.cls_weight and self.loc_weight are defined (see the pysot basics post for nn.Parameter) and passed through a softmax to obtain the fusion weights; the results from the three levels are then weighted-averaged to produce the classification-branch and regression-branch features.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiRPN(RPN):
    def __init__(self, anchor_num, in_channels, weighted=False):
        super(MultiRPN, self).__init__()
        self.weighted = weighted
        for i in range(len(in_channels)):
            self.add_module('rpn'+str(i+2),
                            DepthwiseRPN(anchor_num, in_channels[i], in_channels[i]))
        if self.weighted:
            self.cls_weight = nn.Parameter(torch.ones(len(in_channels)))
            self.loc_weight = nn.Parameter(torch.ones(len(in_channels)))

    def forward(self, z_fs, x_fs):
        cls = []
        loc = []
        for idx, (z_f, x_f) in enumerate(zip(z_fs, x_fs), start=2):
            rpn = getattr(self, 'rpn'+str(idx))
            c, l = rpn(z_f, x_f)
            cls.append(c)
            loc.append(l)

        if self.weighted:
            cls_weight = F.softmax(self.cls_weight, 0)
            loc_weight = F.softmax(self.loc_weight, 0)

        def avg(lst):
            return sum(lst) / len(lst)

        def weighted_avg(lst, weight):
            s = 0
            for i in range(len(weight)):
                s += lst[i] * weight[i]
            return s

        if self.weighted:
            return weighted_avg(cls, cls_weight), weighted_avg(loc, loc_weight)
        else:
            return avg(cls), avg(loc)
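The fusion itself is just a softmax over three learned scalars followed by a weighted sum; a tiny sketch of what weighted_avg computes (shapes assumed: 2×5 classification channels on a 25×25 score map):

import torch
import torch.nn.functional as F

cls_weight = F.softmax(torch.ones(3), dim=0)           # initially [1/3, 1/3, 1/3]
cls = [torch.randn(1, 10, 25, 25) for _ in range(3)]   # outputs of rpn2 / rpn3 / rpn4
fused = sum(c * w for c, w in zip(cls, cls_weight))     # same result as weighted_avg(cls, cls_weight)
print(fused.shape)  # torch.Size([1, 10, 25, 25])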

Inside DepthwiseRPN, DepthwiseXCorr is used twice: once for the classification feature (input 256, hidden 256, output 2*5=10 channels) and once for the localization feature (input 256, hidden 256, output 4*5=20 channels).

In DepthwiseXCorr, the template and search features each go through a 3×3 convolution that keeps the channel count at 256; the template feature is then used as the kernel of a depthwise cross-correlation against the search feature. self.head raises the channel count to 2k or 4k via two 1×1 convolutions, and this happens after xcorr_depthwise. Unlike SiamRPN, which raises the channel count to 2k or 4k before the correlation, SiamRPN++ raises it after the correlation, which removes a large amount of computation.

The depthwise cross-correlation works as follows: the search feature is reshaped into a single tensor of size 1 × (batch*channel) × H × W, the template feature is reshaped into batch*channel kernels of size 1 × H × W, and a grouped convolution with batch*channel groups is applied (see the linked post on grouped convolution). In effect, every input channel is convolved with its own kernel, one-to-one. The result is then reshaped back to (B, C, H, W).

import torch.nn as nn
import torch.nn.functional as F

class DepthwiseRPN(RPN):
    def __init__(self, anchor_num=5, in_channels=256, out_channels=256):
        super(DepthwiseRPN, self).__init__()
        self.cls = DepthwiseXCorr(in_channels, out_channels, 2 * anchor_num)
        self.loc = DepthwiseXCorr(in_channels, out_channels, 4 * anchor_num)

    def forward(self, z_f, x_f):
        cls = self.cls(z_f, x_f)
        loc = self.loc(z_f, x_f)
        return cls, loc


class DepthwiseXCorr(nn.Module):
    def __init__(self, in_channels, hidden, out_channels, kernel_size=3, hidden_kernel_size=5):
        super(DepthwiseXCorr, self).__init__()
        self.conv_kernel = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=kernel_size, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
        )
        self.conv_search = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=kernel_size, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(
            nn.Conv2d(hidden, hidden, kernel_size=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, out_channels, kernel_size=1)
        )

    def forward(self, kernel, search):
        kernel = self.conv_kernel(kernel)
        search = self.conv_search(search)
        feature = xcorr_depthwise(search, kernel)
        out = self.head(feature)
        return out


def xcorr_depthwise(x, kernel):
    """depthwise cross correlation"""
    batch = kernel.size(0)
    channel = kernel.size(1)
    x = x.view(1, batch*channel, x.size(2), x.size(3))
    kernel = kernel.view(batch*channel, 1, kernel.size(2), kernel.size(3))
    out = F.conv2d(x, kernel, groups=batch*channel)
    out = out.view(batch, channel, out.size(2), out.size(3))
    return out
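With the feature sizes that actually reach the head during training (7×7 template feature after the neck crop, 31×31 search feature), the shapes work out as below; a shape-check sketch assuming the RPN base class and the classes above are importable from pysot:

import torch

rpn = DepthwiseRPN(anchor_num=5, in_channels=256, out_channels=256)
z_f = torch.randn(1, 256, 7, 7)    # template feature after the neck
x_f = torch.randn(1, 256, 31, 31)  # search feature after the neck
cls, loc = rpn(z_f, x_f)
# 7x7 -> 5x5 kernel, 31x31 -> 29x29 search, correlation gives 29 - 5 + 1 = 25
print(cls.shape)  # torch.Size([1, 10, 25, 25])
print(loc.shape)  # torch.Size([1, 20, 25, 25])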

4. Forward pass

The template and search patches are first passed through the model built above (backbone, neck, rpn_head) to obtain the classification output cls and the localization output loc. cls is then split into 2 groups (foreground and background), the group dimension is moved to the last axis, and log_softmax is applied along that last axis to obtain the softmax scores cls.

cls and label_cls are then fed into select_cross_entropy_loss to get the classification loss cls_loss; loc, label_loc and label_loc_weight are fed into weight_l1_loss to get the regression loss loc_loss. The returned outputs dict contains total_loss, cls_loss and loc_loss.

def log_softmax(self, cls):
    b, a2, h, w = cls.size()
    cls = cls.view(b, 2, a2//2, h, w)
    cls = cls.permute(0, 2, 3, 4, 1).contiguous()
    cls = F.log_softmax(cls, dim=4)
    return cls

def forward(self, data):
    """ only used in training
    """
    template = data['template'].cuda()
    search = data['search'].cuda()
    label_cls = data['label_cls'].cuda()
    label_loc = data['label_loc'].cuda()
    label_loc_weight = data['label_loc_weight'].cuda()

    # get feature
    zf = self.backbone(template)
    xf = self.backbone(search)
    if cfg.MASK.MASK:
        zf = zf[-1]
        self.xf_refine = xf[:-1]
        xf = xf[-1]
    if cfg.ADJUST.ADJUST:
        zf = self.neck(zf)
        xf = self.neck(xf)
    cls, loc = self.rpn_head(zf, xf)

    # get loss
    cls = self.log_softmax(cls)
    cls_loss = select_cross_entropy_loss(cls, label_cls)
    loc_loss = weight_l1_loss(loc, label_loc, label_loc_weight)

    outputs = {}
    outputs['total_loss'] = cfg.TRAIN.CLS_WEIGHT * cls_loss + \
        cfg.TRAIN.LOC_WEIGHT * loc_loss
    outputs['cls_loss'] = cls_loss
    outputs['loc_loss'] = loc_loss

    if cfg.MASK.MASK:
        # TODO
        mask, self.mask_corr_feature = self.mask_head(zf, xf)
        mask_loss = None
        outputs['total_loss'] += cfg.TRAIN.MASK_WEIGHT * mask_loss
        outputs['mask_loss'] = mask_loss
    return outputs
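select_cross_entropy_loss and weight_l1_loss are not reproduced in this post. Roughly, the former averages the negative log-likelihood separately over positive and negative anchors, and the latter is an L1 loss masked by label_loc_weight. A paraphrased sketch of the idea (not a verbatim copy of pysot's loss code; the real code also guards against empty selections):

import torch
import torch.nn.functional as F

def select_cross_entropy_loss(pred, label):
    # pred: (b, k, h, w, 2) log-softmax scores; label: (b, k, h, w) with
    # 1 = positive anchor, 0 = negative anchor, -1 = ignored
    pred = pred.view(-1, 2)
    label = label.view(-1)
    pos = (label == 1).nonzero().squeeze(1)
    neg = (label == 0).nonzero().squeeze(1)
    loss_pos = F.nll_loss(pred[pos], label[pos].long())
    loss_neg = F.nll_loss(pred[neg], label[neg].long())
    return 0.5 * loss_pos + 0.5 * loss_neg

def weight_l1_loss(pred_loc, label_loc, loss_weight):
    # pred_loc: (b, 4k, h, w), label_loc: (b, 4, k, h, w), loss_weight: (b, k, h, w)
    b, _, sh, sw = pred_loc.size()
    pred_loc = pred_loc.view(b, 4, -1, sh, sw)
    diff = (pred_loc - label_loc).abs().sum(dim=1)  # (b, k, h, w)
    return (diff * loss_weight).sum() / b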

III. load_pretrain

def load_pretrain(model, pretrained_path):
    logger.info('load pretrained model from {}'.format(pretrained_path))
    # 1. pick the current device
    device = torch.cuda.current_device()
    # 2. load the checkpoint
    pretrained_dict = torch.load(pretrained_path,
        map_location=lambda storage, loc: storage.cuda(device))
    # 3. strip the 'module.' prefix (usually left over from multi-GPU
    #    distributed training)
    if "state_dict" in pretrained_dict.keys():
        pretrained_dict = remove_prefix(pretrained_dict['state_dict'],
                                        'module.')
    else:
        pretrained_dict = remove_prefix(pretrained_dict, 'module.')
    # 4. check key integrity: do the parameters of the freshly built model
    #    match the parameters being loaded?
    try:
        check_keys(model, pretrained_dict)
    except:  # add the "features." prefix and try again
        logger.info('[Warning]: using pretrain as features.\
                Adding "features." as prefix')
        new_dict = {}
        for k, v in pretrained_dict.items():
            k = 'features.' + k
            new_dict[k] = v
        pretrained_dict = new_dict
        check_keys(model, pretrained_dict)
    # 5. load the parameters
    model.load_state_dict(pretrained_dict, strict=False)
    return model
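remove_prefix and check_keys are small helpers defined alongside load_pretrain; a paraphrased sketch of what they do (not a line-for-line copy of the pysot source):

def remove_prefix(state_dict, prefix):
    """strip a prefix such as 'module.' left over from DataParallel training"""
    strip = lambda k: k[len(prefix):] if k.startswith(prefix) else k
    return {strip(k): v for k, v in state_dict.items()}

def check_keys(model, pretrained_state_dict):
    """make sure the checkpoint shares at least some keys with the model"""
    ckpt_keys = set(pretrained_state_dict.keys())
    model_keys = set(model.state_dict().keys())
    missing = model_keys - ckpt_keys
    unused = ckpt_keys - model_keys
    if missing:
        logger.info('missing keys: {}'.format(sorted(missing)))
    if unused:
        logger.info('unused checkpoint keys: {}'.format(sorted(unused)))
    assert len(model_keys & ckpt_keys) > 0, 'load NONE from pretrained checkpoint'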

IV. build_data_loader

1. Anchor

The figure drawn by the referenced blogger (original link in the references) shows, from left to right, the detection frame (the 255×255 search region), the anchors on the feature map (red dots), and the generated anchor boxes (blue boxes). Every point on the feature map has to be mapped back to its corresponding position in the detection frame, which is where the variable ori in the code comes from.

import math
import numpy as np
from pysot.utils.bbox import corner2center, center2corner

class Anchors:
    """
    This class generates anchors.
    """
    def __init__(self, stride, ratios, scales, image_center=0, size=0):
        self.stride = stride
        self.ratios = ratios
        self.scales = scales
        self.image_center = image_center
        self.size = size
        self.anchor_num = len(self.scales) * len(self.ratios)  # 1*5
        self.anchors = None
        self.generate_anchors()

    def generate_anchors(self):
        """
        generate anchors based on predefined configuration
        """
        self.anchors = np.zeros((self.anchor_num, 4), dtype=np.float32)  # 5 rows of (x1, y1, x2, y2)
        size = self.stride * self.stride  # 8*8
        count = 0
        for r in self.ratios:
            ws = int(math.sqrt(size*1. / r))
            hs = int(ws * r)  # r = h/w
            for s in self.scales:
                w = ws * s
                h = hs * s
                self.anchors[count][:] = [-w*0.5, -h*0.5, w*0.5, h*0.5][:]  # box centred at (0, 0)
                # Strictly speaking this gives 'bottom-left / top-right' rather than
                # 'top-left / bottom-right' corners -- unless the y axis is taken to
                # point downwards (image coordinates), in which case it really is
                # 'top-left / bottom-right'.
                count += 1
        # generate_anchors computes, for each ratio/scale combination, the
        # 'top-left / bottom-right' extent of one anchor box; generate_all_anchors
        # below then uses the origin and the stride to place these boxes at every
        # sliding-window position.

    def generate_all_anchors(self, im_c, size):  # e.g. 127, 25
        """
        im_c: image center
        size: image size
        """
        if self.image_center == im_c and self.size == size:
            return False
        self.image_center = im_c
        self.size = size

        a0x = im_c - size // 2 * self.stride  # 127 - (25//2 * 8) = 31
        # ori is the position in the detection frame (255x255) that the top-left
        # point of the feature map maps back to
        ori = np.array([a0x] * 4, dtype=np.float32)  # [31. 31. 31. 31.]
        zero_anchors = self.anchors + ori

        x1 = zero_anchors[:, 0]
        y1 = zero_anchors[:, 1]
        x2 = zero_anchors[:, 2]
        y2 = zero_anchors[:, 3]

        x1, y1, x2, y2 = map(lambda x: x.reshape(self.anchor_num, 1, 1),
                             [x1, y1, x2, y2])
        cx, cy, w, h = corner2center([x1, y1, x2, y2])  # cx.shape = (5, 1, 1)

        # multiplied by the stride to map feature-map positions back onto the search frame
        disp_x = np.arange(0, size).reshape(1, 1, -1) * self.stride  # disp_x.shape = (1, 1, 25)
        disp_y = np.arange(0, size).reshape(1, -1, 1) * self.stride

        cx = cx + disp_x  # cx.shape = (5, 1, 25): every disp_x offset is added to each of the 5 anchor centres
        cy = cy + disp_y

        # broadcast
        zero = np.zeros((self.anchor_num, size, size), dtype=np.float32)  # (5, 25, 25)
        cx, cy, w, h = map(lambda x: x + zero, [cx, cy, w, h])  # (5, 25, 25): 5 anchor shapes placed at each of the 25x25 positions
        x1, y1, x2, y2 = center2corner([cx, cy, w, h])

        self.all_anchors = (np.stack([x1, y1, x2, y2]).astype(np.float32),
                            np.stack([cx, cy, w, h]).astype(np.float32))
        return True
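A usage sketch with the anchor settings used by the pysot config (stride 8, five aspect ratios, a single scale of 8; corner2center / center2corner come from pysot/utils/bbox.py):

anchors = Anchors(stride=8,
                  ratios=[0.33, 0.5, 1, 2, 3],
                  scales=[8])
print(anchors.anchors.shape)        # (5, 4): one (x1, y1, x2, y2) box per ratio

anchors.generate_all_anchors(im_c=127, size=25)  # 255x255 search frame, 25x25 feature map
corner, center = anchors.all_anchors
print(corner.shape, center.shape)   # (4, 5, 25, 25) (4, 5, 25, 25)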

2. Dataset

1) Initialization

class TrkDataset(Dataset):
    def __init__(self,):
        super(TrkDataset, self).__init__()
        # expected size of the correlation map:
        # (SEARCH_SIZE - EXEMPLAR_SIZE) / STRIDE + 1 + BASE_SIZE
        # = (255 - 127) / 8 + 1 + 8 = 25
        desired_size = (cfg.TRAIN.SEARCH_SIZE - cfg.TRAIN.EXEMPLAR_SIZE) / \
            cfg.ANCHOR.STRIDE + 1 + cfg.TRAIN.BASE_SIZE
        if desired_size != cfg.TRAIN.OUTPUT_SIZE:
            raise Exception('size not match!')

        # create anchor target: computes the 'top-left / bottom-right' extent of
        # every anchor shape and lays them out over the search frame, (5, 25, 25)
        self.anchor_target = AnchorTarget()

        # create sub dataset
        self.all_dataset = []
        start = 0
        self.num = 0
        for name in cfg.DATASET.NAMES:  # ('VID', 'COCO', 'DET', 'YOUTUBEBB')
            subdata_cfg = getattr(cfg.DATASET, name)
            sub_dataset = SubDataset(
                    name,
                    subdata_cfg.ROOT,
                    subdata_cfg.ANNO,
                    subdata_cfg.FRAME_RANGE,
                    subdata_cfg.NUM_USE,
                    start
                )
            start += sub_dataset.num         # start index of each sub dataset
            self.num += sub_dataset.num_use  # number of videos actually used
            sub_dataset.log()
            self.all_dataset.append(sub_dataset)

        # data augmentation
        # cfg.DATASET.TEMPLATE.SHIFT : 4
        # cfg.DATASET.TEMPLATE.SCALE : 0.05
        # cfg.DATASET.TEMPLATE.BLUR  : 0.0
        # cfg.DATASET.TEMPLATE.FLIP  : 0.0
        # cfg.DATASET.TEMPLATE.COLOR : 1.0
        self.template_aug = Augmentation(
                cfg.DATASET.TEMPLATE.SHIFT,
                cfg.DATASET.TEMPLATE.SCALE,
                cfg.DATASET.TEMPLATE.BLUR,
                cfg.DATASET.TEMPLATE.FLIP,
                cfg.DATASET.TEMPLATE.COLOR
            )
        self.search_aug = Augmentation(
                cfg.DATASET.SEARCH.SHIFT,
                cfg.DATASET.SEARCH.SCALE,
                cfg.DATASET.SEARCH.BLUR,
                cfg.DATASET.SEARCH.FLIP,
                cfg.DATASET.SEARCH.COLOR
            )
        videos_per_epoch = cfg.DATASET.VIDEOS_PER_EPOCH
        self.num = videos_per_epoch if videos_per_epoch > 0 else self.num  # use the configured number of videos per epoch, or all of them
        self.num *= cfg.TRAIN.EPOCH  # total number of videos over all epochs
        self.pick = self.shuffle()   # shuffled list of video indices (only the index list is shuffled; each index still maps to the same video)

    def shuffle(self):
        pick = []
        m = 0
        while m < self.num:
            p = []
            for sub_dataset in self.all_dataset:
                sub_p = sub_dataset.pick
                p += sub_p
            np.random.shuffle(p)
            pick += p
            m = len(pick)
        logger.info("shuffle done!")
        logger.info("dataset length {}".format(self.num))
        return pick[:self.num]
class SubDataset(object):
    def __init__(self, name, root, anno, frame_range, num_use, start_idx):
        cur_path = os.path.dirname(os.path.realpath(__file__))
        self.name = name
        self.root = os.path.join(cur_path, '../../', root)
        self.anno = os.path.join(cur_path, '../../', anno)
        self.frame_range = frame_range  # default 100: the maximum frame gap between template and search frames
        self.num_use = num_use          # number of videos to use; either all of them, or videos are reused until num_use is reached
        self.start_idx = start_idx
        logger.info("loading " + name)
        with open(self.anno, 'r') as f:
            meta_data = json.load(f)
            meta_data = self._filter_zero(meta_data)
            # _filter_zero is essentially a sanity check / filter: it removes
            # annotations whose bounding boxes are invalid. Two things to keep
            # in mind: (1) the code expects boxes in 'top-left, bottom-right'
            # format, while many datasets annotate 'top-left, width, height',
            # so convert when building the json files; (2) some frames may
            # contain no target, in which case the box is [-1, -1, -1, -1];
            # such boxes and frames are dropped and only valid data is kept.

        for video in list(meta_data.keys()):
            for track in meta_data[video]:
                frames = meta_data[video][track]
                frames = list(map(int,
                              filter(lambda x: x.isdigit(), frames.keys())))
                # in train.json / val.json every frame index is a string of
                # length 6; this line converts those strings to ints. So when
                # preparing your own dataset, make sure image names are purely
                # numeric (no prefixes such as 'img', as in some drone datasets)
                frames.sort()  # sort the frame indices
                meta_data[video][track]['frames'] = frames
                # a new key 'frames' is added to the per-track dict, holding
                # the sorted frame list built above
                if len(frames) <= 0:
                    logger.warning("{}/{} has no frames".format(video, track))
                    del meta_data[video][track]

        for video in list(meta_data.keys()):
            if len(meta_data[video]) <= 0:
                logger.warning("{} has no tracks".format(video))
                del meta_data[video]

        self.labels = meta_data
        self.num = len(self.labels)  # number of videos in train.json, i.e. in the whole training set
        self.num_use = self.num if self.num_use == -1 else self.num_use
        self.videos = list(meta_data.keys())
        logger.info("{} loaded".format(self.name))
        self.path_format = '{}.{}.{}.jpg'
        self.pick = self.shuffle()  # shuffle, reusing videos until there are num_use of them
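_filter_zero itself is not shown above; roughly, it walks the annotation dict and drops any box whose width or height is not positive (e.g. the [-1, -1, -1, -1] placeholders), along with tracks and videos that end up empty. A paraphrased sketch of the idea:

def _filter_zero(self, meta_data):
    filtered = {}
    for video, tracks in meta_data.items():
        new_tracks = {}
        for track, frames in tracks.items():
            new_frames = {}
            for frame, bbox in frames.items():
                x1, y1, x2, y2 = bbox
                if x2 - x1 > 0 and y2 - y1 > 0:   # keep only valid boxes
                    new_frames[frame] = bbox
            if new_frames:
                new_tracks[track] = new_frames
        if new_tracks:
            filtered[video] = new_tracks
    return filtered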

2) get item

3) label

References

https://blog.csdn.net/weixin_43084225/article/details/108753735#commentBox

https://blog.csdn.net/laizi_laizi/article/details/108279414#2Generate_Anchors_45

