
Using the ZQPei pedestrian ReID model with the new Yolov5_DeepSort_Pytorch


The Yolov5_DeepSort_Pytorch project (by mikel-brostrom on GitHub) has been revamped to support multiple ReID backbones, so the ReID model from earlier versions, ckpt.t7 (provided by ZQPei), can no longer be used directly. This post records how to use the osnet ReID models in the new mikel-brostrom version, and how to use the ZQPei ckpt.t7 model with it.

This post has been updated; please refer to the updated version.

Verified: the new Yolov5_DeepSort_Pytorch runs with both osnet_x1_0 and osnet_ain_x1_0. Tracking quality is about the same as with the ZQPei model, but it is slower, roughly 40 ms vs 20 ms per frame. A likely reason is the crop size the ReID model resizes to: the osnet models use 256x128 (h, w), while the ZQPei ckpt model uses 128x64 (h, w), i.e. four times fewer pixels per crop. Choosing a smaller resize for the osnet models would speed them up as well.
The goal below is to wrap the ZQPei model as a ReID model that the new version can use.
mikel-brostrom integrates the ReID library provided by KaiyangZhou (torchreid). It is used as follows.
How to install and import torchreid:
Clone KaiyangZhou's GitHub repository into Yolov5_DeepSort_Pytorch/deep_sort/deep and rename the directory to reid, giving Yolov5_DeepSort_Pytorch/deep_sort/deep/reid.
Assuming conda and a virtual environment are already set up and the packages required by Yolov5_DeepSort_Pytorch are installed, change into the reid directory and run:

python setup.py develop

This installs torchreid, and import torchreid can then be used in code.
From the Model Zoo in KaiyangZhou's GitHub, download a weight file, e.g. osnet_x1_0.pth, and put it in the checkpoint directory: Yolov5_DeepSort_Pytorch/deep_sort/deep/checkpoint.
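A quick way to confirm the setup is to build one of these models with torchreid and load the downloaded weights; a minimal sketch (the paths are the ones used above):

import torchreid
from torchreid.utils import load_pretrained_weights

# sanity check: torchreid is importable and the downloaded weights load
model = torchreid.models.build_model('osnet_x1_0', num_classes=751, pretrained=False)
load_pretrained_weights(model, 'Yolov5_DeepSort_Pytorch/deep_sort/deep/checkpoint/osnet_x1_0.pth')
model.eval()
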
(1) Modify deep_sort.yaml

DEEPSORT:
  MODEL_TYPE: "osnet_x1_0"    # "ZQP"
  REID_CKPT:  '~/Yolov5_DeepSort_Pytorch/deep_sort/deep/checkpoint/osnet_x1_0_imagenet.pth'
# REID_CKPT:  '~/Yolov5_DeepSort_Pytorch/deep_sort/deep/checkpoint/ckpt.t7'
  MAX_DIST: 0.1 # 0.2 The matching threshold. Samples with larger distance are considered an invalid match
  MAX_IOU_DISTANCE: 0.7 # 0.7 Gating threshold. Associations with cost larger than this value are disregarded.
  MAX_AGE: 90 # 30 Maximum number of consecutive missed detections before a track is deleted
  N_INIT: 3 # 3  Number of frames that a track remains in initialization phase
  NN_BUDGET: 100 # 100 Maximum size of the appearance descriptors gallery
  MIN_CONFIDENCE: 0.75
  NMS_MAX_OVERLAP: 1.0
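
For intuition on MAX_DIST: DeepSort compares appearance descriptors by cosine distance and rejects associations whose distance exceeds this threshold. A rough illustration of the gating rule (not code from the repo):

import torch

def cosine_distance(a, b):
    # features are L2-normalized, so cosine distance is 1 minus the dot product
    a = a / a.norm(p=2, dim=-1, keepdim=True)
    b = b / b.norm(p=2, dim=-1, keepdim=True)
    return 1.0 - (a * b).sum(dim=-1)

MAX_DIST = 0.1
feat_track = torch.randn(512)   # stored appearance descriptor of a track
feat_det = torch.randn(512)     # descriptor of a new detection
d = cosine_distance(feat_track, feat_det)
print(d.item(), "valid match" if d <= MAX_DIST else "invalid match")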

(2) In track.py, specify the ReID model and add the checkpoint path.

parser.add_argument('--deep_sort_model', type=str, default='osnet_x1_0')
# parser.add_argument('--deep_sort_model', type=str, default='ZQP')

deepsort = DeepSort(deep_sort_model,
                    cfg.DEEPSORT.REID_CKPT,   # pass the checkpoint path here
                    device,
                    max_dist=cfg.DEEPSORT.MAX_DIST,
                    max_iou_distance=cfg.DEEPSORT.MAX_IOU_DISTANCE,
                    max_age=cfg.DEEPSORT.MAX_AGE, n_init=cfg.DEEPSORT.N_INIT, nn_budget=cfg.DEEPSORT.NN_BUDGET,
                    )

Giving the weight-file path in deep_sort.yaml skips the online weight download and loads the weights directly from local disk.
With this, the DeepSort tracking script track.py can be run with an osnet ReID model.

Note: in the modifications above, the lines commented out with # are the variants used for the ZQPei model.

How to add the ZQPei model to the ReID library: model name ZQP, model file model_ZQP.py.
(3) Modify model.py from ZQPei's GitHub code.
(a) Add a factory function to the file:

def ZQP(num_classes=751, pretrained=True, loss='softmax', **kwargs):
    model = Net(
        num_classes=num_classes,
        pretrained=pretrained,
        loss='softmax',
        **kwargs
    )
    return model

(b) Remove the original reid flag; the model's output branch is now selected with self.training:

def forward(self, x):
    x = self.conv(x)
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)
    # x = self.avgpool(x)
    x = self.adaptiveavgpool(x)   # replace avgpool with adaptiveavgpool so different resize sizes work
    x = x.view(x.size(0), -1)
    if not self.training:   # the original self.reid flag is replaced by self.training
        x = x.div(x.norm(p=2, dim=1, keepdim=True))    # L2-normalize the features
        return x
    # classifier
    x = self.classifier(x)
    return x
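
A quick way to see the effect of this change (a sketch only; it assumes the modified model_ZQP.py above, including the adaptiveavgpool layer, and the import path is illustrative):

import torch
from model_ZQP import ZQP

# sketch: eval() mode returns L2-normalized ReID features, train() mode returns logits
model = ZQP(num_classes=751, pretrained=False)   # pretrained=False here just to skip weight loading
x = torch.randn(4, 3, 128, 64)                   # a batch of 4 crops, (h, w) = (128, 64)

model.eval()
with torch.no_grad():
    feats = model(x)                             # expected shape [4, 512], each row with unit L2 norm
print(feats.shape, feats.norm(p=2, dim=1))

model.train()
logits = model(x)                                # expected shape [4, 751] classification logits
print(logits.shape)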

To avoid a name clash, rename the original model.py to model_ZQP.py.

(4) Add to deep_sort/deep/reid/torchreid/models/__init__.py:

from .model_ZQP import *

Add the model to the __model_factory dictionary:

__model_factory = {
    # image classification models
    'resnet18': resnet18,
    'resnet34': resnet34,
    'resnet50': resnet50,
    ......
    'ZQP': ZQP
}
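
After this registration, torchreid's build_model can construct the ZQPei network by name. This mirrors the call that feature_extractor.py makes in step (5); a sketch only, assuming the modified Net tolerates the extra keyword arguments that build_model forwards:

from torchreid.models import build_model

# build_model now resolves the name 'ZQP' through __model_factory
model = build_model('ZQP', num_classes=751, pretrained=True)
model.eval()   # eval mode -> forward() returns normalized ReID features (step 3b)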

(5) In deep_sort/deep/reid/torchreid/utils/feature_extractor.py, import the model:

from deep_sort.deep.reid.torchreid.models.model_ZQP import Net

In its __init__ function, change image_size and num_classes:

def __init__(
    self,
    model_name='',
    model_path='',
    image_size=(128, 64),             # was (256, 128)
    pixel_mean=[0.485, 0.456, 0.406],
    pixel_std=[0.229, 0.224, 0.225],
    pixel_norm=True,
    device='cuda',
    verbose=True
):
    # Build model
    model = build_model(
        model_name,
        num_classes=751,              # changed to the number of identities in Market-1501
        pretrained=True,
        use_gpu=device.startswith('cuda')
    )

(6) In addition, the state dict inside ckpt.t7 is stored under the key net_dict rather than the usual state_dict, so one more change is needed.
Modify deep_sort/deep/reid/torchreid/utils/torchtools.py:

checkpoint = load_checkpoint(weight_path)
if 'state_dict' in checkpoint:
    state_dict = checkpoint['state_dict']
elif 'net_dict' in checkpoint:          # added: ckpt.t7 keeps its weights under 'net_dict'
    state_dict = checkpoint['net_dict']
else:
    state_dict = checkpoint
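
The key difference is easy to verify by inspecting the checkpoint directly (a sketch; the path is the local ckpt.t7 from the yaml above):

import torch

# ckpt.t7 keeps its weights under 'net_dict' instead of the usual 'state_dict'
ckpt = torch.load('Yolov5_DeepSort_Pytorch/deep_sort/deep/checkpoint/ckpt.t7', map_location='cpu')
print(list(ckpt.keys()))   # 'net_dict' appears among the keys and holds the weights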

The ZQPei ckpt.t7 model can now be run. The changes are numerous, but feature_extractor.py stays compatible with the other ReID models.

Alternatively, rewrite feature_extractor.py directly as below. This only works for ckpt.t7; the other ReID models can no longer be used:

from __future__ import absolute_import
import numpy as np
import torch
import torchvision.transforms as transforms
import cv2

from torchreid.utils import check_isfile
from deep_sort.deep.reid.torchreid.models.model_ZQP import Net


class FeatureExtractor(object):
    def __init__(
        self,
        model_name='',
        model_path='',
        image_size=(64, 128),    # (w, h); the torchreid default is (256, 128)
        pixel_mean=[0.485, 0.456, 0.406],
        pixel_std=[0.229, 0.224, 0.225],
        pixel_norm=True,
        device='cuda'
    ):
        self.net = Net(pretrained=True)
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        if model_path and check_isfile(model_path):
            # ckpt.t7 stores its weights under the key 'net_dict'
            state_dict = torch.load(model_path, map_location=torch.device(self.device))['net_dict']
            self.net.load_state_dict(state_dict)
        self.net.eval()
        self.size = (64, 128)    # (width, height)
        self.norm = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])
        self.net.to(self.device)

    def _preprocess(self, im_crops):
        def _resize(im, size):
            return cv2.resize(im.astype(np.float32) / 255., size)

        im_batch = torch.cat([
            self.norm(_resize(im, self.size)).unsqueeze(0) for im in im_crops
        ], dim=0).float()
        return im_batch

    def __call__(self, im_crops):
        im_batch = self._preprocess(im_crops)
        with torch.no_grad():
            im_batch = im_batch.to(self.device)
            features = self.net(im_batch)
        return features  # .cpu().numpy()
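
A quick test of this rewritten extractor (a sketch; the checkpoint path and crop sizes are illustrative):

import numpy as np

# feed a couple of dummy BGR crops through the rewritten extractor
extractor = FeatureExtractor(model_path='Yolov5_DeepSort_Pytorch/deep_sort/deep/checkpoint/ckpt.t7')
crops = [np.random.randint(0, 255, (h, w, 3), dtype=np.uint8) for (h, w) in [(160, 64), (200, 80)]]
features = extractor(crops)
print(features.shape)   # expected: torch.Size([2, 512])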

Summary
Compatible approach:
(1) Modify deep_sort.yaml: set the weight-file path and the model name.
(2) Change the ReID model name on the track.py command line.
(3) Create model_ZQP.py with the modifications above.
(4) Register the model name ZQP in __init__.py.
(5) In feature_extractor.py, change image_size and num_classes and import the model.
(6) In torchtools.py, map the checkpoint key net_dict to state_dict.
Non-compatible approach:
(1) Modify deep_sort.yaml: set the weight-file path and the model name.
(2) Change the ReID model name on the track.py command line.
(3) Create model_ZQP.py with the modifications above.
(4) Register the model name ZQP in __init__.py.
(5) Replace feature_extractor.py.
