
Deploying YOLOv5 with OpenVINO 2022.3.x and OpenCV DNN (C++): the full process, with code


Deploying OpenVINO on Windows involves quite a few pitfalls, so this post records the whole process from the first step to avoid having to rediscover it later.

The first step is, of course, getting the YOLOv5 project itself to run. With version 7.0 a single run downloads everything it needs automatically, so there is essentially no complicated setup.

Once it runs, pip-install the OpenVINO packages that export.py needs:

  1.  openvino: the main OpenVINO package. It contains the core tools, libraries, and plugins used to optimize, run, and deploy deep learning models, which may include (but is not limited to) the Model Optimizer, the nGraph API, the Inference Engine, and the plugins for the various hardware devices.
      openvino-dev: the developer package, containing the headers, libraries, and tooling you need to build your own programs against the OpenVINO APIs.
      openvino-telemetry: collects and sends runtime data and logs so the system can be monitored and debugged while it runs. Its version does not need to match the other two and that causes no problems; openvino and openvino-dev themselves should be pinned to the same 2022.3.x release (e.g. pip install openvino openvino-dev openvino-telemetry, with matching versions for the first two).

  2.  Configuring export: in export.py, set the weights path to your own trained .pt model (and the matching yaml), and choose the export format you need; here we choose openvino (typically a call along the lines of python export.py --weights best.pt --include openvino). Then run it and wait. If it fails here with a DLL import error, the installed versions most likely do not match. The export produces an .xml and a .bin file. XML file: describes the model's network graph; in it you can find each layer's name, type, and parameters, and how the layers are connected. BIN file: a binary file containing the model's weights and biases. A quick way to sanity-check the exported IR is sketched right below.
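
    As a small, optional sanity check of the exported IR (a minimal sketch; D://best.xml is just the example path reused later in this post), you can load it with the OpenVINO Runtime and print the input/output names and shapes:

    #include <openvino/openvino.hpp>
    #include <iostream>

    int main() {
        ov::Core core;
        // read_model() picks up best.bin automatically when it sits next to best.xml
        std::shared_ptr<ov::Model> model = core.read_model("D://best.xml");
        for (const auto& input : model->inputs())
            std::cout << "input : " << input.get_any_name() << " " << input.get_partial_shape() << std::endl;
        for (const auto& output : model->outputs())
            std::cout << "output: " << output.get_any_name() << " " << output.get_partial_shape() << std::endl;
        return 0;
    }

    For a YOLOv5 export you should see one image-sized input and a detection output with 5 + num_classes values per box; if the shapes look wrong, the export step is the place to revisit.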

  3.  Configuring OpenVINO and OpenCV in Visual Studio: follow VS+OpenCV+OpenVINO2022详细配置(更新) - 知乎 (zhihu.com) and set up the corresponding VS environment (include directories, library directories, and linker inputs) as described there.

  4.  Deployment code:

    #include <iostream>
    #include <ctime>
    #include <opencv2/dnn.hpp>
    #include <openvino/openvino.hpp>
    #include <opencv2/opencv.hpp>

    using namespace std;

    const float SCORE_THRESHOLD = 0.2;
    const float NMS_THRESHOLD = 0.4;
    const float CONFIDENCE_THRESHOLD = 0.4;

    struct Detection
    {
        int class_id;
        float confidence;
        cv::Rect box;
    };

    struct Resize
    {
        cv::Mat resized_image;
        int dw;
        int dh;
    };

    // Letterbox: scale the image to fit new_shape while keeping its aspect ratio,
    // then pad the right/bottom with gray so the result is exactly new_shape.
    Resize resize_and_pad(cv::Mat& img, cv::Size new_shape) {
        float width = img.cols;
        float height = img.rows;
        float r = float(new_shape.width) / max(width, height);
        int new_unpadW = int(round(width * r));
        int new_unpadH = int(round(height * r));
        Resize resize;
        cv::resize(img, resize.resized_image, cv::Size(new_unpadW, new_unpadH), 0, 0, cv::INTER_AREA);
        resize.dw = new_shape.width - new_unpadW;
        resize.dh = new_shape.height - new_unpadH;
        cv::Scalar color = cv::Scalar(100, 100, 100);
        cv::copyMakeBorder(resize.resized_image, resize.resized_image, 0, resize.dh, 0, resize.dw, cv::BORDER_CONSTANT, color);
        return resize;
    }

    int main() {
        // Step 1. Initialize OpenVINO Runtime core
        ov::Core core;

        // Step 2. Read the model (change the xml/bin paths to your own)
        std::shared_ptr<ov::Model> model = core.read_model("D://best.xml", "D://best.bin");

        // Step 3. Read the input image (change the path to your own)
        cv::Mat img = cv::imread("D:/p.bmp");
        // Letterbox-resize the image to the model input size
        Resize res = resize_and_pad(img, cv::Size(640, 640));

        // Step 4. Initialize preprocessing for the model
        ov::preprocess::PrePostProcessor ppp = ov::preprocess::PrePostProcessor(model);
        // Specify input image format
        ppp.input().tensor().set_element_type(ov::element::u8).set_layout("NHWC").set_color_format(ov::preprocess::ColorFormat::BGR);
        // Specify the preprocess pipeline for the input image (no resizing here)
        ppp.input().preprocess().convert_element_type(ov::element::f32).convert_color(ov::preprocess::ColorFormat::RGB).scale({ 255., 255., 255. });
        // Specify the model's input layout
        ppp.input().model().set_layout("NCHW");
        // Specify the output format
        ppp.output().tensor().set_element_type(ov::element::f32);
        // Embed the above steps in the graph
        model = ppp.build();
        ov::CompiledModel compiled_model = core.compile_model(model, "CPU");

        // Step 5. Create a tensor that wraps the (u8, NHWC) image buffer
        float* input_data = (float*)res.resized_image.data;
        ov::Tensor input_tensor = ov::Tensor(compiled_model.input().get_element_type(), compiled_model.input().get_shape(), input_data);

        // Step 6. Create an infer request, run inference, and time it
        ov::InferRequest infer_request = compiled_model.create_infer_request();
        infer_request.set_input_tensor(input_tensor);
        double start = clock();
        infer_request.infer();
        double end = clock();
        double last = (end - start) * 1000.0 / CLOCKS_PER_SEC;
        cout << "Detect Time: " << last << " ms" << endl;

        // Step 7. Retrieve inference results
        const ov::Tensor& output_tensor = infer_request.get_output_tensor();
        ov::Shape output_shape = output_tensor.get_shape();   // [1, num_boxes, 5 + num_classes]
        float* detections = output_tensor.data<float>();

        // Step 8. Postprocessing, including NMS
        std::vector<cv::Rect> boxes;
        vector<int> class_ids;
        vector<float> confidences;
        for (int i = 0; i < output_shape[1]; i++) {
            float* detection = &detections[i * output_shape[2]];
            float confidence = detection[4];
            if (confidence >= CONFIDENCE_THRESHOLD) {
                float* classes_scores = &detection[5];
                cv::Mat scores(1, output_shape[2] - 5, CV_32FC1, classes_scores);
                cv::Point class_id;
                double max_class_score;
                cv::minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
                if (max_class_score > SCORE_THRESHOLD) {
                    confidences.push_back(confidence);
                    class_ids.push_back(class_id.x);
                    // Convert (cx, cy, w, h) to a top-left-anchored rectangle
                    float x = detection[0];
                    float y = detection[1];
                    float w = detection[2];
                    float h = detection[3];
                    float xmin = x - (w / 2);
                    float ymin = y - (h / 2);
                    boxes.push_back(cv::Rect(xmin, ymin, w, h));
                }
            }
        }
        std::vector<int> nms_result;
        cv::dnn::NMSBoxes(boxes, confidences, SCORE_THRESHOLD, NMS_THRESHOLD, nms_result);
        std::vector<Detection> output;
        for (int i = 0; i < nms_result.size(); i++)
        {
            Detection result;
            int idx = nms_result[i];
            result.class_id = class_ids[idx];
            result.confidence = confidences[idx];
            result.box = boxes[idx];
            output.push_back(result);
        }

        // Step 9. Print results and save the image with detections drawn on it
        for (int i = 0; i < output.size(); i++)
        {
            auto detection = output[i];
            auto box = detection.box;
            auto classId = detection.class_id;
            auto confidence = detection.confidence;
            // Map the boxes back from the letterboxed 640x640 space to the original image
            float rx = (float)img.cols / (float)(res.resized_image.cols - res.dw);
            float ry = (float)img.rows / (float)(res.resized_image.rows - res.dh);
            box.x = rx * box.x;
            box.y = ry * box.y;
            box.width = rx * box.width;
            box.height = ry * box.height;
            cout << "Bbox " << i + 1 << ": Class: " << classId << " "
                 << "Confidence: " << confidence << " Scaled coords: [ "
                 << "cx: " << (float)(box.x + (box.width / 2)) / img.cols << ", "
                 << "cy: " << (float)(box.y + (box.height / 2)) / img.rows << ", "
                 << "w: " << (float)box.width / img.cols << ", "
                 << "h: " << (float)box.height / img.rows << " ]" << endl;
            float xmax = box.x + box.width;
            float ymax = box.y + box.height;
            cv::rectangle(img, cv::Point(box.x, box.y), cv::Point(xmax, ymax), cv::Scalar(0, 255, 0), 3);
            cv::rectangle(img, cv::Point(box.x, box.y - 20), cv::Point(xmax, box.y), cv::Scalar(0, 255, 0), cv::FILLED);
            cv::putText(img, std::to_string(classId), cv::Point(box.x, box.y - 5), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 0));
        }
        cv::imwrite("D:/pres.bmp", img);

        // Display the result
        cv::namedWindow("ImageWindow", cv::WINDOW_NORMAL);
        cv::resizeWindow("ImageWindow", 800, 600);
        cv::imshow("ImageWindow", img);
        cv::waitKey(0);
        cv::destroyAllWindows();
        return 0;
    }

    If everything above went fine, this should run as-is. If not, first check whether you built the Debug or Release configuration and whether the libraries you linked match it (the ones with a 'd' suffix are the debug builds). Most other crashes come down to version mismatches, which are genuinely tedious to track down; if you cannot find the culprit, reinstalling a consistent set of versions is often the fastest fix. At this point the OpenVINO setup is essentially complete.

  5.  Next, the OpenCV DNN deployment path.

    OpenCV DNN (Deep Neural Network) is a module in the OpenCV library that provides support for deep neural networks. It lets you load pretrained deep learning models and run inference with them for computer vision tasks such as object detection, image classification, and pose estimation. The module supports models trained in a variety of deep learning frameworks; with a simple API you load a trained model, feed in an image, and get predictions back. It provides a high-performance graph execution engine that can run image processing and analysis in real time on the CPU or on GPU-accelerated hardware. In short, OpenCV DNN gives developers a quick and convenient way to use deep learning models for image processing and computer vision; a minimal sketch of the load-and-forward flow follows below.
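
    As a minimal sketch of that flow (not the project's actual code, which wraps this logic in a Detector class; the paths reuse the examples from this post), loading the exported ONNX model with cv::dnn and running one forward pass looks roughly like this:

    #include <opencv2/opencv.hpp>
    #include <opencv2/dnn.hpp>
    #include <iostream>

    int main() {
        // Load the ONNX model exported from YOLOv5
        cv::dnn::Net net = cv::dnn::readNetFromONNX("D://best.onnx");
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

        // Preprocess: scale to [0,1], swap BGR->RGB, resize to the 640x640 network input
        cv::Mat img = cv::imread("D://p.bmp");
        cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 255.0, cv::Size(640, 640), cv::Scalar(), true, false);

        // One forward pass; for YOLOv5 the output is [1, num_boxes, 5 + num_classes]
        net.setInput(blob);
        cv::Mat out = net.forward();
        std::cout << "output: " << out.size[0] << " x " << out.size[1] << " x " << out.size[2] << std::endl;
        return 0;
    }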

  6.  Here is the reference project: Hexmagic/ONNX-yolov5: deploy yolov5 in c++ (github.com). The original author builds on Linux while we are on Windows, so run CMake on its CMakeLists.txt to generate the project files, then open the resulting solution. (Alternatively, simply copy the sources into a project of your own and skip CMake entirely.) Note that the generated solution contains four projects; you can remove three of them and keep only main. Beyond that, the only setup needed is essentially the OpenCV configuration; adjust the paths and it is ready to run. main.cpp:

    #include <string>
    #include <vector>
    #include <fstream>
    #include <ctime>
    #include "loguru.hpp"
    #include "detector.h"

    using namespace cv;
    using namespace std;

    /* main */
    int main(int argc, char *argv[])
    {
        // Default parameters (change the paths to your own)
        std::cout << "OpenCV version : " << CV_VERSION << std::endl;
        string model_path = "D://best.onnx";
        string img_path = "D://p.bmp";
        //string model_path = "3_best.onnx";
        //string img_path = "data/images/zidane.jpg";
        loguru::init(argc, argv);
        // Config (see detector.h in the repo): thresholds, model path, class-name file, input size
        Config config = {0.25f, 0.45f, model_path, "E://yolo5//ONNX-yolov5-master//data//coco.names", Size(640, 640), false};
        LOG_F(INFO, "Start main process");
        Detector detector(config);
        LOG_F(INFO, "Load model done ..");
        Mat img = imread(img_path, IMREAD_COLOR);
        LOG_F(INFO, "Read image from %s", img_path.c_str());
        // Time the detection
        double start = clock();
        Detection detection = detector.detect(img);
        double end = clock();
        double last = (end - start) * 1000.0 / CLOCKS_PER_SEC;
        cout << "Detect Time: " << last << " ms" << endl;
        LOG_F(INFO, "Detect process finished");
        Colors cl = Colors();
        detector.postProcess(img, detection, cl);
        LOG_F(INFO, "Post process done save image to assets/output.bmp");
        cv::imwrite("E:/yolo5/ONNX-yolov5-master/assets/p.bmp", img);
        std::cout << "detect Image And Save to assets/output.bmp" << endl;
        return 0;
    }

    That essentially wraps up the OpenCV DNN and OpenVINO deployment process. When something goes wrong, look at the versions first: the deployment code above has been tested many times and works as written. Thanks for reading.
