
YOLOv5-face on video streams


Note that I am not using the original yoloface code here. The original has one class and five keypoints; my version has 4 classes and 4 keypoints. For how to make those changes and the related background, see my earlier post:

yoloV5-face学习笔记 (yoloV5-face study notes), m0_58348465's blog (CSDN)

Modification approach

Since yoloV5-face strips out a lot of functionality, such as video-stream input and deployment support, I want to modify yolov5's own detect.py directly so that it can run on video streams.

Debug the two codebases side by side and modify yolov5's detect.py.

First, change detect.py's

pred = non_max_suppression

to yoloface's

pred = non_max_suppression_face

Remember to import the corresponding function at the top of the file as well.
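For example, assuming non_max_suppression_face has been copied into (or is importable from) yolov5's utils/general.py, as in the yolov5-face repo, the change amounts to something like this sketch:

# top of detect.py: import the face-aware NMS alongside the usual utilities
from utils.general import non_max_suppression_face  # assumes the function was copied into utils/general.py

# inside run(): the face-aware NMS keeps the 8 landmark columns in every detection row
# pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)  # original
pred = non_max_suppression_face(pred, conf_thres, iou_thres)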

The problem

After this change, yolov5 outputs bounding boxes correctly, but the keypoint positions are wildly wrong (yoloface's keypoint output is fine).

First attempt

Debugging shows the two inputs have different shapes: face feeds in 800*640 while yolo feeds in 640*512; in other words, their img_size settings differ (the letterbox sketch after the next snippet shows where those two shapes come from). Change yoloface's default img_size to 640, as follows:

def detect_one(model, image_path, device):
    # Load model
    img_size = 640
    conf_thres = 0.3
    iou_thres = 0.5
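As an aside, those two input shapes are simply what yolov5's letterbox produces for the same frame at different img_size values. A minimal check, assuming a 5:4 source such as 1280*1024 (the exact source resolution is my assumption) and yolov5's letterbox helper:

import numpy as np
from utils.augmentations import letterbox  # in yolov5-face the same helper lives in utils/datasets.py

frame = np.zeros((1024, 1280, 3), dtype=np.uint8)            # hypothetical 1280*1024 source frame
print(letterbox(frame, 640, stride=32, auto=True)[0].shape)  # (512, 640, 3)  -> yolo's 640*512
print(letterbox(frame, 800, stride=32, auto=True)[0].shape)  # (640, 800, 3)  -> face's 800*640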

After changing img_size, surprisingly, the keypoint positions improved as well.

Curious, I kept changing img_size and found that values around 600 worked best. Why?

Reason: I had picked the wrong training weights (a checkpoint that had not fully converged), so even small, unrelated changes had an outsized effect.

Second attempt

After fixing the input size, the dimensions matched, but the keypoints were still way off. Strangest of all, the non-max-suppression function had been copied over verbatim, and the pred printed during debugging looked identical in both codebases. Just as I was about to give up, it occurred to me that the tensor might simply be too large: only the displayed portion was identical, while the values hidden by the ellipsis could differ.

The following code removes the ellipsis when tensors are printed during debugging:

import numpy as np
import torch

np.set_printoptions(threshold=1e3)        # threshold on the number of array elements printed
torch.set_printoptions(threshold=np.inf)  # print full tensors, no ellipsis

With this in place, debugging confirmed that the values hidden behind the ellipsis were indeed different.
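Rather than eyeballing full printouts, a more reliable way to compare the two intermediate tensors is a numeric check. A small sketch; the .pt dumps are hypothetical files saved from the two codebases during debugging:

import torch

# hypothetical tensor dumps captured at the same point in yolov5 and yoloface
pred_a = torch.load('pred_yolov5.pt')
pred_b = torch.load('pred_yoloface.pt')

print(torch.equal(pred_a, pred_b))                # exact element-wise equality
print(torch.allclose(pred_a, pred_b, atol=1e-6))  # equality up to floating-point tolerance
print((pred_a - pred_b).abs().max())              # largest absolute difference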

I then diffed the parsed model with a file-comparison tool, but the two architectures were identical. The only remaining possibility was that the two codebases decode the head outputs differently. Since I had already modified the Detect class in yoloface's yolo.py while debugging, the most likely culprit was the Detect module. So I rewrote yolov5's Detect module to match yoloface (my 4-point version), with the following result:

class Detect(nn.Module):
    stride = None  # strides computed during build
    onnx_dynamic = False  # ONNX export parameter

    def __init__(self, nc=80, anchors=(), ch=(), inplace=True):  # detection layer
        super().__init__()
        self.nc = nc  # number of classes
        self.no = nc + 5 + 8  # number of outputs per anchor (box + obj + 4 landmarks * 2)
        self.nl = len(anchors)  # number of detection layers
        self.na = len(anchors[0]) // 2  # number of anchors
        self.grid = [torch.zeros(1)] * self.nl  # init grid
        self.anchor_grid = [torch.zeros(1)] * self.nl  # init anchor grid
        self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2))  # shape(nl,na,2)
        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv
        self.inplace = inplace  # use in-place ops (e.g. slice assignment)

    def forward(self, x):
        z = []  # inference output
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

            if not self.training:  # inference
                if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
                    self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)

                y = torch.full_like(x[i], 0)
                class_range = list(range(5)) + list(range(13, 13 + self.nc))
                y[..., class_range] = x[i][..., class_range].sigmoid()  # apply sigmoid to everything except the landmark values
                y[..., 5:13] = x[i][..., 5:13]
                if self.inplace:
                    y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i]  # xy
                    y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                    y[..., 5:7] = y[..., 5:7] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x1 y1
                    y[..., 7:9] = y[..., 7:9] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x2 y2
                    y[..., 9:11] = y[..., 9:11] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x3 y3
                    y[..., 11:13] = y[..., 11:13] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x4 y4
                else:  # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
                    xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i]  # xy
                    wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                    # y[..., 5:15] = y[..., 5:15] * 8 - 4
                    y[..., 5:7] = y[..., 5:7] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x1 y1
                    y[..., 7:9] = y[..., 7:9] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x2 y2
                    y[..., 9:11] = y[..., 9:11] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x3 y3
                    y[..., 11:13] = y[..., 11:13] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x4 y4
                    y = torch.cat((xy, wh, y[..., 4:]), -1)
                z.append(y.view(bs, -1, self.no))

        return x if self.training else (torch.cat(z, 1), x)

    def _make_grid(self, nx=20, ny=20, i=0):
        d = self.anchors[i].device
        if check_version(torch.__version__, '1.10.0'):  # torch>=1.10.0 meshgrid workaround for torch>=0.7 compatibility
            yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)], indexing='ij')
        else:
            yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)])
        grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float()
        anchor_grid = (self.anchors[i].clone() * self.stride[i]) \
            .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float()
        return grid, anchor_grid

Debugging again, the outputs were now correct, so the next step is to visualize the results.
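Before wiring up the output, it helps to write down the per-detection layout that the modified head and the face-aware NMS produce, since all the slicing below (det[:, 5:13], det[j, 13] and so on) relies on it. A minimal reference sketch for my 4-class, 4-landmark model:

import torch

# One row of det after the face-aware NMS has 14 columns:
#   [0:4]   box corners x1, y1, x2, y2
#   [4]     confidence
#   [5:13]  landmarks x1, y1, x2, y2, x3, y3, x4, y4
#   [13]    class index
det = torch.zeros(1, 14)  # dummy tensor just to illustrate the slicing used below
xyxy, conf, landmarks, cls = det[0, :4], det[0, 4], det[0, 5:13], det[0, 13]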

Outputting the results

To output the results, the keypoints first need to be rescaled to the original image size. Around line 170 of detect.py, right after if len(det):, add a call to scale_coords_landmarks. After the change it looks like this:

if len(det):
    # Rescale boxes from img_size to im0 size
    det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()
    det[:, 5:13] = scale_coords_landmarks(im.shape[2:], det[:, 5:13], im0.shape).round()

    # Print results
    for c in det[:, -1].unique():
        n = (det[:, -1] == c).sum()  # detections per class
        # s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string (changed)

We also need to define the scale_coords_landmarks function near the top of detect.py; the code is as follows:

def scale_coords_landmarks(img1_shape, coords, img0_shape, ratio_pad=None):
    # Rescale coords (xyxy) from img1_shape to img0_shape
    if ratio_pad is None:  # calculate from img0_shape
        gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain = old / new
        pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding
    else:
        gain = ratio_pad[0][0]
        pad = ratio_pad[1]

    coords[:, [0, 2, 4, 6]] -= pad[0]  # x padding
    coords[:, [1, 3, 5, 7]] -= pad[1]  # y padding
    coords[:, :8] /= gain
    # clip_coords(coords, img0_shape)
    coords[:, 0].clamp_(0, img0_shape[1])  # x1
    coords[:, 1].clamp_(0, img0_shape[0])  # y1
    coords[:, 2].clamp_(0, img0_shape[1])  # x2
    coords[:, 3].clamp_(0, img0_shape[0])  # y2
    coords[:, 4].clamp_(0, img0_shape[1])  # x3
    coords[:, 5].clamp_(0, img0_shape[0])  # y3
    coords[:, 6].clamp_(0, img0_shape[1])  # x4
    coords[:, 7].clamp_(0, img0_shape[0])  # y4
    # coords[:, 8].clamp_(0, img0_shape[1])  # x5
    # coords[:, 9].clamp_(0, img0_shape[0])  # y5
    return coords

Visualizing the results (test version, feel free to skip)

The last step is visualizing the results. First, comment out the entire Write results block in detect.py, like this:

# Write results
# for *xyxy, conf, cls in reversed(det):
#     if save_txt:  # Write to file
#         xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
#         line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
#         with open(txt_path + '.txt', 'a') as f:
#             f.write(('%g ' * len(line)).rstrip() % line + '\n')
#
#     if save_img or save_crop or view_img:  # Add bbox to image
#         c = int(cls)  # integer class
#         label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
#         # annotator.box_label(xyxy, label, color=colors(c, True))
#         if save_crop:
#             save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)

Then replace it with this code:

for j in range(det.size()[0]):
    xywh = (xyxy2xywh(det[j, :4].view(1, 4)) / gn).view(-1).tolist()
    conf = det[j, 4].cpu().numpy()
    landmarks = (det[j, 5:13].view(1, 8) / gn_lks).view(-1).tolist()
    class_num = det[j, 13].cpu().numpy()
    im0 = show_results(im0, xywh, conf, landmarks, class_num)

Likewise, we need to define the show_results function near the top of detect.py:

def show_results(img, xywh, conf, landmarks, class_num):
    h, w, c = img.shape
    tl = 1 or round(0.002 * (h + w) / 2) + 1  # line/font thickness
    x1 = int(xywh[0] * w - 0.5 * xywh[2] * w)
    y1 = int(xywh[1] * h - 0.5 * xywh[3] * h)
    x2 = int(xywh[0] * w + 0.5 * xywh[2] * w)
    y2 = int(xywh[1] * h + 0.5 * xywh[3] * h)
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), thickness=tl, lineType=cv2.LINE_AA)

    clors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]
    for i in range(4):
        point_x = int(landmarks[2 * i] * w)
        point_y = int(landmarks[2 * i + 1] * h)
        cv2.circle(img, (point_x, point_y), tl + 1, clors[i], -1)

    tf = max(tl - 1, 1)  # font thickness
    label = str(conf)[:5]
    cv2.putText(img, label, (x1, y1 - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA)
    return img

This code also uses gn_lks, which has not been defined yet, so around line 210 of detect.py change

gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]

to

gn = torch.tensor(im0.shape)[[1, 0, 1, 0]].to(device)  # normalization gain whwh
gn_lks = torch.tensor(im0.shape)[[1, 0, 1, 0, 1, 0, 1, 0]].to(device)  # normalization gain for landmarks

After these changes, the results are visualized correctly.

Visualizing the results (final version)

Although the changes above do produce a visualization, they only draw points onto the image or video; none of the save-to-txt and related features are used. To keep the code robust, it is better to modify detect.py's own Write results block directly, as sketched below.
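A rough sketch of what that integration could look like, keeping yolov5's own save_txt and Annotator logic and only adding the landmarks. The variable names follow the original Write results loop; the extended txt label format (class, xywh, 8 landmark values, conf) is my own choice, not yoloface's:

# Write results (sketch): each det row is [x1, y1, x2, y2, conf, 8 landmark coords, cls]
for j in range(det.size()[0]):
    xyxy, conf, landmarks, cls = det[j, :4], det[j, 4], det[j, 5:13], det[j, 13]
    c = int(cls)  # integer class

    if save_txt:  # write box + landmarks to file (hypothetical extended label format)
        xywh = (xyxy2xywh(xyxy.view(1, 4)) / gn).view(-1).tolist()
        lmk = (landmarks.view(1, 8) / gn_lks).view(-1).tolist()
        line = (cls, *xywh, *lmk, conf) if save_conf else (cls, *xywh, *lmk)
        with open(txt_path + '.txt', 'a') as f:
            f.write(('%g ' * len(line)).rstrip() % line + '\n')

    if save_img or save_crop or view_img:  # draw the box, label and landmarks
        label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
        annotator.box_label(xyxy, label, color=colors(c, True))
        for k in range(4):  # assumes the Annotator is in cv2 mode, i.e. drawing straight onto im0
            cv2.circle(im0, (int(landmarks[2 * k]), int(landmarks[2 * k + 1])), 3, colors(k, True), -1)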

I noticed something interesting: in the run function (below), the data parameter defaults to coco.yaml, yet when detect.py is run the effective default turns out to be coco128.yaml. Moreover, no matter what you pass for data, even a file that does not exist, the output is unaffected and the reported data path is still coco128.yaml.

@torch.no_grad()
def run(weights=ROOT / 'yolov5s.pt',  # model.pt path(s)
        source=ROOT / 'data/images',  # file/dir/URL/glob, 0 for webcam
        data=ROOT / 'data/coco.yaml',  # dataset.yaml path
        imgsz=(640, 640),  # inference size (height, width)

A little analysis shows why: the argument-parsing code below passes coco128.yaml by default, which means run is always called with coco128.yaml explicitly, so its own default of coco.yaml never takes effect. To change the effective defaults, you have to edit the argument-parsing code at the very bottom of detect.py:

def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path(s)')
    parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob, 0 for webcam')
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path')
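Pointing the script at your own files is then just a matter of editing those default= values (or passing the flags on the command line). A sketch, where the weights path and dataset yaml name are hypothetical placeholders for my 4-class model:

parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'runs/train/exp/weights/best.pt', help='model path(s)')  # hypothetical path to my trained weights
parser.add_argument('--source', type=str, default='0', help='file/dir/URL/glob, 0 for webcam')                               # 0 = webcam video stream
parser.add_argument('--data', type=str, default=ROOT / 'data/my_4class.yaml', help='(optional) dataset.yaml path')           # hypothetical dataset yaml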

Another interesting thing: although I pointed detect at a yaml file containing my class names, the printed class names were still 0, 1, 2, 3. It turns out the .pt file stores the class names from training, and when I trained, the names were 0, 1, 2, 3. The yaml supplied to detect is read before the names are loaded from the .pt file, so my new names get overwritten by the ones baked into the checkpoint.
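To inspect (or permanently fix) the names baked into the checkpoint, you can edit the .pt file itself. A small sketch, with a hypothetical weights path and hypothetical class names:

import torch

ckpt = torch.load('runs/train/exp/weights/best.pt', map_location='cpu')  # hypothetical weights path
print(ckpt['model'].names)  # names stored at training time, e.g. ['0', '1', '2', '3']

ckpt['model'].names = ['classA', 'classB', 'classC', 'classD']  # hypothetical real names for my 4 classes
torch.save(ckpt, 'runs/train/exp/weights/best_renamed.pt')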
