
opencv dnn module example (24): object detection (object_detection) with yolov8-pose and yolov8-obb

The previous post 【opencv dnn模块 示例(23) 目标检测 object_detection 之 yolov8】 already covered the YOLOv8 network and its testing in detail. This post continues with YOLOv8 for human pose estimation (pose) and oriented bounding box detection (OBB).

1、Simple usage of YOLOv8-pose

Human pose estimation here uses the COCO annotation format with 17 keypoints per person.
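The 17 COCO keypoints, in index order, are: 0 nose, 1 left eye, 2 right eye, 3 left ear, 4 right ear, 5 left shoulder, 6 right shoulder, 7 left elbow, 8 right elbow, 9 left wrist, 10 right wrist, 11 left hip, 12 right hip, 13 left knee, 14 right knee, 15 left ankle, 16 right ankle. These indices are what the keypairs skeleton table below refers to.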

Converting yolov8m-pose.pt to ONNX:

(yolo_pytorch) E:\DeepLearning\yolov8-ultralytics>yolo pose export model=yolov8m-pose.pt format=onnx  batch=1 imgsz=640
Ultralytics YOLOv8.0.154  Python-3.9.16 torch-1.13.1+cu117 CPU (Intel Core(TM) i7-7700K 4.20GHz)
YOLOv8m-pose summary (fused): 237 layers, 26447596 parameters, 0 gradients, 81.0 GFLOPs

PyTorch: starting from 'yolov8m-pose.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 56, 8400) (50.8 MB)

ONNX: starting export with onnx 1.14.0 opset 16...
ONNX: export success  3.3s, saved as 'yolov8m-pose.onnx' (101.2 MB)

Export complete (7.1s)
Results saved to E:\DeepLearning\yolov8-ultralytics
Predict:         yolo predict task=pose model=yolov8m-pose.onnx imgsz=640
Validate:        yolo val task=pose model=yolov8m-pose.onnx imgsz=640 data=/usr/src/app/ultralytics/datasets/coco-pose.yaml
Visualize:       https://netron.app

With a 640 input the output shape is (1, 56, 8400); the 56 values of each prediction are laid out as 4 + 1 + 17×3:
box [x, y, w, h], object confidence conf, and 17 keypoints (x, y, conf).
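The postprocess() fragment below elides its boilerplate with "...". For orientation, here is a minimal sketch of the assumed wiring around it (an assumption, not the original post's exact code): load the ONNX model, run inference, and reshape the (1, 56, 8400) output so that each row holds one prediction. It defines the names tmp, data, rows, dimensions, x_factor and y_factor used by the fragment, assuming a plain (non-letterboxed) resize to 640×640:

// Minimal sketch of the inference wiring (assumed; matches the variable
// names used by postprocess() below).
cv::dnn::Net net = cv::dnn::readNet("yolov8m-pose.onnx");

cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 255.0, cv::Size(640, 640),
                                      cv::Scalar(), true, false);
net.setInput(blob);
std::vector<cv::Mat> outs;
net.forward(outs, net.getUnconnectedOutLayersNames());

// outs[0] is (1, 56, 8400): view it as 56x8400, then transpose so each of
// the 8400 rows holds one 56-value prediction
cv::Mat tmp = outs[0].reshape(1, outs[0].size[1]); // 56 x 8400
cv::transpose(tmp, tmp);                           // 8400 x 56

int rows = tmp.rows;        // 8400 candidate predictions
int dimensions = tmp.cols;  // 56 = 4 + 1 + 17*3
float* data = (float*)tmp.data;

// scale factors mapping 640x640 network coordinates back to the input image
float x_factor = frame.cols / 640.f;
float y_factor = frame.rows / 640.f;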

In the post-processing, add a container that stores the keypoint data and a function that draws the keypoints:

void postprocess(Mat& frame, cv::Size inputSz, const std::vector<Mat>& outs, Net& net)
{
    // yolov8-pose has an output of shape (batchSize, 56, 8400) ( box[x,y,w,h] + conf + 17*(x,y,conf) )
    ...
    std::vector<cv::Mat> keypoints;

    for(int i = 0; i < rows; ++i) {
        float confidence = data[4];
        if(confidence >= confThreshold) {
            ...

            boxes.push_back(cv::Rect(left, top, width, height));

            // copy this prediction's 51 keypoint values and rescale the
            // (x, y) coordinates back to the original image
            cv::Mat keypoint = cv::Mat(1, dimensions - 5, CV_32F, tmp.ptr<float>(i, 5)).clone();
            for(int k = 0; k < 17; k++) {
                keypoint.at<float>(k * 3 + 0) *= x_factor;
                keypoint.at<float>(k * 3 + 1) *= y_factor;
            }
            keypoints.push_back(keypoint);
        }
        data += dimensions;
    }

    std::vector<int> indices;
    NMSBoxes(boxes, confidences, scoreThreshold, nmsThreshold, indices);

    for(size_t i = 0; i < indices.size(); ++i) {
        ...
        drawSkelton(keypoints[idx], frame);
    }
}

std::vector<cv::Scalar> kptcolors = {
     {255, 0, 0}, {255, 85, 0}, {255, 170, 0}, {255, 255, 0}, {170, 255, 0}, {85, 255, 0},
     {0, 255, 0}, {0, 255, 85}, {0, 255, 170}, {0, 255, 255}, {0, 170, 255}, {0, 85, 255},
     {0, 0, 255}, {255, 0, 170}, {170, 0, 255}, {255, 0, 255}, {85, 0, 255},
};

// COCO skeleton: pairs of keypoint indices connected by a bone
std::vector<std::vector<int>> keypairs = {
    {15, 13},{13, 11},{16, 14},{14, 12},{11, 12},{5, 11},
    {6, 12},{5, 6},{5, 7},{6, 8},{7, 9},{8, 10},{1, 2},
    {0, 1},{0, 2},{1, 3},{2, 4},{3, 5},{4, 6}
};

void drawSkelton(const Mat& keypoints, Mat& frame)
{
    // view the 1x51 float row as 17 packed (x, y, conf) triplets
    cv::Mat pts = keypoints.reshape(3); // 1 x 17, CV_32FC3

    for(auto& pair : keypairs) {
        const auto& pt1 = pts.at<cv::Point3f>(pair[0]);
        const auto& pt2 = pts.at<cv::Point3f>(pair[1]);
        if(pt1.z > 0.5 && pt2.z > 0.5) {
            cv::line(frame, cv::Point(pt1.x, pt1.y), cv::Point(pt2.x, pt2.y), {255, 255, 0}, 2);
        }
    }

    for(int i = 0; i < 17; i++) {
        cv::Point3f pt = pts.at<cv::Point3f>(i);
        if(pt.z < 0.5)  // skip occluded / low-confidence keypoints
            continue;
        cv::circle(frame, cv::Point(pt.x, pt.y), 3, kptcolors[i], -1);
        cv::putText(frame, cv::format("%d", i), cv::Point(pt.x, pt.y), 1, 1, {255, 0, 0});
    }
}
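Each entry in keypoints is a plain 1×51 CV_32F row; reshape(3) gives a 1×17 CV_32FC3 view of the same data, so each (x, y, conf) triplet can be read as a cv::Point3f without copying.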

The result is shown below:
[figure: pose estimation result with the drawn skeleton]

2、YOLOv8-OBB

On January 10, 2024, Ultralytics released v8.1.0 - YOLOv8 Oriented Bounding Boxes (OBB).

Building on its support for classification, object detection, instance segmentation and pose estimation, the YOLOv8 framework now also supports oriented object detection (OBB). The models are based on the DOTA dataset and detect 15 object categories in aerial imagery, including vehicles, ships and various typical playing fields, across more than 2,800 images and roughly 180,000 object instances:

0: plane
1: baseball-diamond
2: bridge
3: ground-track-field
4: small-vehicle
5: large-vehicle
6: ship
7: tennis-court
8: basketball-court
9: storage-tank
10: soccer-ball-field
11: roundabout
12: harbor
13: swimming-pool
14: helicopter
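For labeling detections in an OpenCV demo (used in the decoding sketch in section 2.1 below), the class names can be kept in a vector. dotaClassNames is a helper introduced here, not part of the original post; its order follows the list above:

// DOTAv1 class names, in the index order used by the OBB output
const std::vector<std::string> dotaClassNames = {
    "plane", "baseball-diamond", "bridge", "ground-track-field",
    "small-vehicle", "large-vehicle", "ship", "tennis-court",
    "basketball-court", "storage-tank", "soccer-ball-field",
    "roundabout", "harbor", "swimming-pool", "helicopter"
};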

The OBB models are trained on DOTAv1, which contains these 15 categories. Accuracy and speed for the different YOLOv8-OBB model scales are listed below:

| Model | size (pixels) | mAP50 (test) | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOv8n-obb | 1024 | 78.0 | 204.77 | 3.57 | 3.1 | 23.3 |
| YOLOv8s-obb | 1024 | 79.5 | 424.88 | 4.07 | 11.4 | 76.3 |
| YOLOv8m-obb | 1024 | 80.5 | 763.48 | 7.61 | 26.4 | 208.6 |
| YOLOv8l-obb | 1024 | 80.7 | 1278.42 | 11.83 | 44.5 | 433.8 |
| YOLOv8x-obb | 1024 | 81.36 | 1759.10 | 13.23 | 69.5 | 676.7 |

The official ship and vehicle detection examples:
[figure: official YOLOv8-OBB ship and vehicle detection examples]

2.1、Python command-line test

For example, test with the yolov8m-obb model:

yolo obb predict model=yolov8m-obb.pt source=t.jpg

Ultralytics YOLOv8.1.19 
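To mirror the pose example with the OpenCV dnn module, below is a minimal decoding sketch for the OBB output. The layout is an assumption, not confirmed by the original post: with imgsz=1024 the exported ONNX model produces a (1, 20, 21504) tensor, laid out per prediction as box [cx, cy, w, h] + 15 class scores + rotation angle in radians. confThreshold, nmsThreshold and the scale factors are passed in as in the pose example, and a plain square resize is assumed so x_factor and y_factor apply uniformly to the rotated box:

// Minimal OBB post-processing sketch (assumed output layout, see above).
void postprocessObb(cv::Mat& frame, const std::vector<cv::Mat>& outs,
                    float x_factor, float y_factor,
                    float confThreshold, float nmsThreshold)
{
    cv::Mat tmp = outs[0].reshape(1, outs[0].size[1]); // 20 x 21504
    cv::transpose(tmp, tmp);                           // 21504 x 20

    std::vector<cv::RotatedRect> boxes;
    std::vector<float> confidences;
    std::vector<int> classIds;

    for(int i = 0; i < tmp.rows; ++i) {
        const float* data = tmp.ptr<float>(i);

        // best of the 15 DOTA class scores
        cv::Mat scores(1, 15, CV_32F, (void*)(data + 4));
        cv::Point classId;
        double maxScore;
        cv::minMaxLoc(scores, nullptr, &maxScore, nullptr, &classId);
        if(maxScore < confThreshold)
            continue;

        float angleDeg = data[19] * 180.f / (float)CV_PI; // radians -> degrees
        boxes.emplace_back(cv::Point2f(data[0] * x_factor, data[1] * y_factor),
                           cv::Size2f(data[2] * x_factor, data[3] * y_factor),
                           angleDeg);
        confidences.push_back((float)maxScore);
        classIds.push_back(classId.x);
    }

    // NMS on the rotated rectangles themselves, provided by the dnn module
    std::vector<int> indices;
    cv::dnn::NMSBoxesRotated(boxes, confidences, confThreshold, nmsThreshold, indices);

    for(int idx : indices) {
        cv::Point2f pts[4];
        boxes[idx].points(pts);
        for(int k = 0; k < 4; ++k)
            cv::line(frame, pts[k], pts[(k + 1) % 4], {0, 255, 0}, 2);
        cv::putText(frame, dotaClassNames[classIds[idx]], pts[0],
                    cv::FONT_HERSHEY_SIMPLEX, 0.5, {0, 255, 0});
    }
}

Rotated NMS matters here: for densely packed elongated objects such as ships, axis-aligned IoU would suppress neighbors that the rotated boxes do not actually overlap.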