
Deploying Deep-Learning Inference on a Raspberry Pi with Qt + OpenCV + YOLOv5-Lite + C++

Preface:

        This article deploys YOLO-series inference on a Raspberry Pi using Qt and OpenCV's dnn deep-learning module. It works with yolov5-6.1 as well as yolov5-Lite. Compared with running onnxruntime from Python, the OpenCV dnn route only needs the .onnx model produced by training, so deployment is quick and no extra deep-learning environment has to be installed on the Pi; as we all know, setting up such environments is fiddly and troublesome.
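        To make this concrete, the whole DNN route boils down to a few lines of OpenCV code. The sketch below is only an illustration: the model name v5lite-s.onnx, the test image and the 320x320 input size are placeholders and must match your own export settings.

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

int main() {
    // Load the exported onnx model with OpenCV's dnn module (no PyTorch/onnxruntime needed)
    cv::dnn::Net net = cv::dnn::readNetFromONNX("v5lite-s.onnx");  // placeholder file name
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    cv::Mat img = cv::imread("test.jpg");                          // placeholder test image
    // blobFromImage scales pixels to [0,1], resizes to the network input and swaps BGR->RGB;
    // letterbox padding is omitted here for brevity
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 255.0, cv::Size(320, 320),
                                          cv::Scalar(), true, false);
    net.setInput(blob);
    cv::Mat out = net.forward();   // the single merged output head
    std::cout << "output elements: " << out.total() << std::endl;
    return 0;
}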

        I started with YOLOv5-6.1. Deployed in a PC virtual machine it reached about 7 fps, but on the Raspberry Pi it was extremely slow, roughly 0.3 fps, i.e. more than 3 s per inference, nowhere near real time. I therefore switched to the lighter yolov5-Lite network, which runs at around 1.5 fps after deployment. I have seen another author reach about 5 fps by calling onnxruntime directly; the gap is probably because Qt takes up a large share of the Pi's RAM, which slows everything down.

        The workflow in this article is therefore: first collect the required dataset with a Hikvision industrial camera, then train the YOLOv5-Lite network on a PC. After training, convert the generated .pt (PyTorch) weight file to .onnx so that OpenCV's dnn module can run inference on it directly. With the onnx weights in hand, set up the inference environment on the Raspberry Pi, i.e. install Qt and OpenCV. Once the environment is ready, build a simple interface in Qt, wire all the modules together according to your own needs, and the inference runs. Without further ado, my simple interface looks like this.

Let's now walk through the implementation step by step.

1. Dataset preparation

        Since this is a small industrial project built around equipment provided by the client, the dataset was collected with a Hikvision industrial camera, and the same camera is connected for real-time inference later. How to control a Hikvision industrial camera from Qt on Linux is explained in detail in the linked post. If you are using a public dataset, you can skip this section. For labelling the dataset and setting up the deep-learning training environment, I followed the excellent posts by 炮哥, in particular "目标检测---教你利用yolov5训练自己的目标检测模型"; he has many more posts that cover everything needed to build a YOLOv5 environment from scratch. YOLOv5-Lite uses the same environment as YOLOv5, so it can be reused directly.

2. Training the model

        Training only involves the usual parameter changes: point the yaml file at your own dataset paths and so on. I will not repeat the details here, since the posts linked in the previous section cover them and yolov5-Lite works the same way. After training you will find best.pt and last.pt in one of the exp... folders under the train directory; best.pt is the one to convert to onnx.

3. Converting .pt to onnx

        This step is critical and directly determines whether inference will succeed. The yolov5 authors already provide the conversion: running export.py produces the onnx file.
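        For reference, a typical export command looks roughly like the one below. The flags and the weight path are illustrative and differ slightly between yolov5 releases and yolov5-Lite, so check the export.py of the exact repository you trained with, and keep the image size consistent with training (320 in my case).

python export.py --weights runs/train/exp/weights/best.pt --img 320 --batch 1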

        My code is written for YOLOv5-6.1 inference, or for onnx exports whose outputs have been merged into a single head, as shown in the figure below: output combines the three scales (80, 40 and 20) into one tensor, which makes it easier to read during inference.

 Of course, the format in the next figure can also be used; it just requires a small change to the inference code, which is covered later.

 4. Setting up the Qt and OpenCV environment on the Raspberry Pi

(1) Installing Qt5

        Installing Qt on the Raspberry Pi is fairly straightforward; plenty of tutorials can be found and most of them work. Note, however, that from version 5.14 onwards Qt switched to an online installer, so the Pi must be connected to the internet or the installation will fail. The version I installed is Qt 5.14.2.

(2) Installing OpenCV

        Installing OpenCV is something of a lottery. If you are lucky it works on the first attempt; I was not, and spent an entire afternoon on the Raspberry Pi chasing one build error after another. Below is the procedure for configuring OpenCV on Linux that I eventually settled on, explained in detail.

        1) Download the OpenCV 4.7.0 source archive

        Go to https://opencv.org/releases.html and pick the matching version. Note that loading these deep-learning models with the dnn module requires OpenCV 4.5 or newer, so do not install anything older than 4.5.

        2) Preparation before building

        After downloading the archive, right-click and extract it into the current directory, then enter the opencv-4.7.0 directory and create two folders inside it, build and install, which will be used later during compilation.
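        In shell form, the preparation amounts to something like this (assuming the archive was extracted in the home directory):

cd ~/opencv-4.7.0
mkdir build install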

        Next, install CMake, cmake-gui and the libraries they depend on (if something goes wrong, first check whether you have run sudo apt update and sudo apt upgrade) by executing the following commands:

sudo apt-get install cmake-qt-gui
sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
        3) Building and installing

            Go into the build folder to configure with CMake, and run the command

cmake-gui

This opens the cmake-gui graphical interface; the places to change are shown in the figure below.

First, field 1 is the path of the extracted OpenCV source tree and field 2 is the path of the build folder; be sure to tick Advanced at 3. Then click Configure to start the configuration. After "Configuring done" appears, do not change the install path; keep the default /usr/local, otherwise all kinds of errors will follow. See the figure below.

 Run Configure and then Generate again; if no red messages appear, the configuration succeeded. If red errors do show up, search for the error text; a solution can usually be found.
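        If you prefer to skip the GUI, an equivalent command-line configuration from inside the build folder looks roughly like this; it mirrors the defaults used above, and OPENCV_GENERATE_PKGCONFIG=ON makes sure the opencv4.pc file needed by pkg-config later is generated.

cd ~/opencv-4.7.0/build
cmake -D CMAKE_BUILD_TYPE=Release \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D OPENCV_GENERATE_PKGCONFIG=ON \
      ..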

        Next, go into the build directory and run:

sudo make -j4
sudo make install

        Now the long compilation begins. I hope every reader makes it straight to 100%; if not, you are in for the same painful error-fixing diary I went through.

        4) Configure the environment after installation

        After the build finishes, the OpenCV environment variables still need to be configured. Enter the directory:

cd /etc/ld.so.conf.d/
sudo vim opencv.conf

After creating the file, insert:

/usr/local/lib

Save and exit, then run:

sudo ldconfig

Next, configure bash:

sudo vim /etc/bash.bashrc

Add at the end:

PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig

export PKG_CONFIG_PATH

Save and exit, then run:

source /etc/bash.bashrc
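At this point you can sanity-check that both the shared libraries and the pkg-config file are visible (if pkg-config cannot find opencv4, re-run cmake with -D OPENCV_GENERATE_PKGCONFIG=ON and rebuild):

pkg-config --modversion opencv4        # should print 4.7.0
ldconfig -p | grep libopencv_core      # should list /usr/local/lib/libopencv_core.so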

Finally, install mlocate:

sudo apt-get update
sudo apt-get install mlocate

Update its database:

sudo updatedb

Finally, test with a small Qt program that uses OpenCV, for example one that simply opens and displays an image (a minimal sketch follows below). With that, the OpenCV deployment is complete.
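        The test program can be as small as the one below. It is only a sketch: add CONFIG += link_pkgconfig and PKG_CONFIG += opencv4 to the .pro file so that qmake picks up the OpenCV flags, and replace test.jpg with any image on the Pi.

#include <QApplication>
#include <QLabel>
#include <QImage>
#include <QPixmap>
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    cv::Mat bgr = cv::imread("test.jpg");            // placeholder image path
    if (bgr.empty())
        return -1;
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);       // OpenCV stores BGR, QImage expects RGB
    QImage qimg(rgb.data, rgb.cols, rgb.rows,
                static_cast<int>(rgb.step), QImage::Format_RGB888);
    QLabel label;
    label.setPixmap(QPixmap::fromImage(qimg.copy())); // copy() detaches from rgb's buffer
    label.show();
    return app.exec();
}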

5. Developing the Qt interface and running inference

   There is not much that needs detailed explanation here, so straight to the code:

 (1)main.cpp
#include "widget.h"
#include "mainwidget.h"
#include "yolov5.h"
#include <QApplication>
#include <math.h>
using namespace std;
using namespace cv;
using namespace dnn;

int main(int argc, char *argv[])
{
    QCoreApplication::setAttribute(Qt::AA_UseSoftwareOpenGL);
    QApplication a(argc, argv);
    MainWidget mw;
    mw.show();
    return a.exec();
}

Here MainWidget is run as the main window; there is no other extra functionality.

(2) Header files

There are the five headers listed above: mainwidget.h belongs to the main window; MvCamera.h is for the Hikvision camera; mythread.h creates a separate thread that controls the interval of the images coming from the camera (you can drop it if you only detect on video files); widget.h is also related to the industrial camera; and yolov5.h handles the inference, which is the one worth focusing on. The rest should be easy to follow, so here is the code.

   1)mainwidget.h
  1. #ifndef MAINWIDGET_H
  2. #define MAINWIDGET_H
  3. #define MAX_MAINDEVICE_NUM 255
  4. #include "ui_widget.h"
  5. #include "ui_mainwidget.h"
  6. #include "widget.h"
  7. #include <QFileDialog>
  8. #include <QFile>
  9. #include <opencv2/opencv.hpp>
  10. #include <opencv2/dnn.hpp>
  11. #include <QMainWindow>
  12. #include <QTimer>
  13. #include <QImage>
  14. #include <QPixmap>
  15. #include <QDateTime>
  16. #include <QMutex>
  17. #include <QMutexLocker>
  18. #include <QMimeDatabase>
  19. #include <iostream>
  20. #include "MvCamera.h"
  21. #include <chrono>
  22. #include<math.h>
  23. #include <opencv2/highgui/highgui.hpp>
  24. #include "mythread.h"
  25. #include "yolov5.h"
  26. #include <QLineEdit>
  27. #include <QTextCursor>
  28. using namespace cv;
  29. using namespace std;
  30. using namespace dnn;
  31. QPixmap MatImage(cv::Mat src);
  32. QT_BEGIN_NAMESPACE
  33. namespace Ui {
  34. class MainWidget;
  35. }
  36. class Widget; //前向声明,在qt中要用另一个类时
  37. class CMvCamera;
  38. class YoloV5;
  39. class MainWidget : public QWidget
  40. {
  41. Q_OBJECT
  42. public:
  43. explicit MainWidget(QWidget *parent = nullptr);
  44. //存储相机配置界面的指针
  45. Widget *wid=NULL;
  46. CMvCamera *m_pcMyMainCamera[MAX_MAINDEVICE_NUM]; // 相机指针对象
  47. cv::Mat *myImage_Main = new cv::Mat(); //保存相机图像的图像指针对象
  48. MyThread *myThread_Camera_Mainshow = NULL; //相机画面实时显示线程对象
  49. std::vector<std::string> className1 = { /*"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
  50. "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
  51. "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
  52. "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
  53. "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
  54. "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
  55. "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
  56. "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
  57. "hair drier", "toothbrush" */"OK","NG"};
  58. Ui::MainWidget *ui;
  59. ~MainWidget();
  60. public:
  61. // 状态
  62. bool m_bOpenDevice; // 是否打开设备
  63. bool m_bStartGrabbing; // 是否开始抓图
  64. int m_nTriggerMode; // 触发模式
  65. int m_bContinueStarted;
  66. int num;
  67. // 开启过连续采集图像
  68. MV_SAVE_IAMGE_TYPE m_nSaveImageType; // 保存图像格式
  69. YoloV5 yolov5;
  70. Net net;
  71. private slots:
  72. void on_pbn_start_grab_clicked();
  73. void closeW(); //关闭子窗体的槽函数
  74. void readFrame();
  75. void on_openfile_clicked();
  76. void on_pbn_load_model_clicked();
  77. void on_startdetect_clicked();
  78. void on_stopdetect_clicked();
  79. void on_cbx_modelName_activated(const QString &arg1);
  80. void display_myImage_Main(const Mat *imagePrt);
  81. void getPtr();
  82. void yoloDetect( Mat *imagePrt);
  83. //void showCamera();
  84. void on_checkBox_stateChanged(int arg1);
  85. private:
  86. QTimer *timer;
  87. cv::VideoCapture *capture;
  88. std::vector<cv::Rect> bboxes;
  89. int IsDetect_ok =0;
  90. };
  91. #endif // MAINWIDGET_H
2)MvCamera.h
  1. #ifndef MVCAMERA_H
  2. #define MVCAMERA_H
  3. #include "MvCameraControl.h"
  4. #include <string.h>
  5. #include <QDebug>
  6. #include <stdio.h>
  7. #ifndef MV_NULL
  8. #define MV_NULL 0
  9. #endif
  10. #include "opencv2/opencv.hpp"
  11. #include "opencv2/imgproc/types_c.h"
  12. class CMvCamera
  13. {
  14. public:
  15. CMvCamera();
  16. ~CMvCamera();
  17. // ch:获取SDK版本号 | en:Get SDK Version
  18. static int GetSDKVersion();
  19. // ch:枚举设备 | en:Enumerate Device
  20. static int EnumDevices(unsigned int nTLayerType,
  21. MV_CC_DEVICE_INFO_LIST *pstDevList);
  22. // ch:判断设备是否可达 | en:Is the device accessible
  23. static bool IsDeviceAccessible(MV_CC_DEVICE_INFO *pstDevInfo,
  24. unsigned int nAccessMode);
  25. // ch:打开设备 | en:Open Device
  26. int Open(MV_CC_DEVICE_INFO *pstDeviceInfo);
  27. // ch:关闭设备 | en:Close Device
  28. int Close();
  29. // ch:判断相机是否处于连接状态 | en:Is The Device Connected
  30. bool IsDeviceConnected();
  31. // ch:注册图像数据回调 | en:Register Image Data CallBack
  32. int RegisterImageCallBack(
  33. void(__stdcall *cbOutput)(unsigned char *pData,
  34. MV_FRAME_OUT_INFO_EX *pFrameInfo,
  35. void *pUser),
  36. void *pUser);
  37. // ch:开启抓图 | en:Start Grabbing
  38. int StartGrabbing();
  39. // ch:停止抓图 | en:Stop Grabbing
  40. int StopGrabbing();
  41. // ch:主动获取一帧图像数据 | en:Get one frame initiatively
  42. int GetImageBuffer(MV_FRAME_OUT *pFrame, int nMsec);
  43. // ch:释放图像缓存 | en:Free image buffer
  44. int FreeImageBuffer(MV_FRAME_OUT *pFrame);
  45. // ch:主动获取一帧图像数据 | en:Get one frame initiatively
  46. int GetOneFrameTimeout(unsigned char *pData, unsigned int *pnDataLen,
  47. unsigned int nDataSize,
  48. MV_FRAME_OUT_INFO_EX *pFrameInfo, int nMsec);
  49. // ch:显示一帧图像 | en:Display one frame image
  50. int DisplayOneFrame(MV_DISPLAY_FRAME_INFO *pDisplayInfo);
  51. // ch:设置SDK内部图像缓存节点个数 | en:Set the number of the internal image
  52. // cache nodes in SDK
  53. int SetImageNodeNum(unsigned int nNum);
  54. // ch:获取设备信息 | en:Get device information
  55. int GetDeviceInfo(MV_CC_DEVICE_INFO *pstDevInfo);
  56. // ch:获取GEV相机的统计信息 | en:Get detect info of GEV camera
  57. int GetGevAllMatchInfo(MV_MATCH_INFO_NET_DETECT *pMatchInfoNetDetect);
  58. // ch:获取U3V相机的统计信息 | en:Get detect info of U3V camera
  59. int GetU3VAllMatchInfo(MV_MATCH_INFO_USB_DETECT *pMatchInfoUSBDetect);
  60. // ch:获取和设置Int型参数,如 Width和Height,详细内容参考SDK安装目录下的
  61. // MvCameraNode.xlsx 文件 en:Get Int type parameters, such as Width and
  62. // Height, for details please refer to MvCameraNode.xlsx file under SDK
  63. // installation directory
  64. // int GetIntValue(IN const char* strKey, OUT MVCC_INTVALUE_EX* pIntValue);
  65. int GetIntValue(IN const char *strKey, OUT unsigned int *pnValue);
  66. int SetIntValue(IN const char *strKey, IN int64_t nValue);
  67. // ch:获取和设置Enum型参数,如 PixelFormat,详细内容参考SDK安装目录下的
  68. // MvCameraNode.xlsx 文件 en:Get Enum type parameters, such as PixelFormat,
  69. // for details please refer to MvCameraNode.xlsx file under SDK installation
  70. // directory
  71. int GetEnumValue(IN const char *strKey, OUT MVCC_ENUMVALUE *pEnumValue);
  72. int SetEnumValue(IN const char *strKey, IN unsigned int nValue);
  73. int SetEnumValueByString(IN const char *strKey, IN const char *sValue);
  74. // ch:获取和设置Float型参数,如
  75. // ExposureTime和Gain,详细内容参考SDK安装目录下的 MvCameraNode.xlsx 文件
  76. // en:Get Float type parameters, such as ExposureTime and Gain, for details
  77. // please refer to MvCameraNode.xlsx file under SDK installation directory
  78. int GetFloatValue(IN const char *strKey, OUT MVCC_FLOATVALUE *pFloatValue);
  79. int SetFloatValue(IN const char *strKey, IN float fValue);
  80. // ch:获取和设置Bool型参数,如 ReverseX,详细内容参考SDK安装目录下的
  81. // MvCameraNode.xlsx 文件 en:Get Bool type parameters, such as ReverseX, for
  82. // details please refer to MvCameraNode.xlsx file under SDK installation
  83. // directory
  84. int GetBoolValue(IN const char *strKey, OUT bool *pbValue);
  85. int SetBoolValue(IN const char *strKey, IN bool bValue);
  86. // ch:获取和设置String型参数,如 DeviceUserID,详细内容参考SDK安装目录下的
  87. // MvCameraNode.xlsx 文件UserSetSave en:Get String type parameters, such as
  88. // DeviceUserID, for details please refer to MvCameraNode.xlsx file under
  89. // SDK installation directory
  90. int GetStringValue(IN const char *strKey, MVCC_STRINGVALUE *pStringValue);
  91. int SetStringValue(IN const char *strKey, IN const char *strValue);
  92. // ch:执行一次Command型命令,如 UserSetSave,详细内容参考SDK安装目录下的
  93. // MvCameraNode.xlsx 文件 en:Execute Command once, such as UserSetSave, for
  94. // details please refer to MvCameraNode.xlsx file under SDK installation
  95. // directory
  96. int CommandExecute(IN const char *strKey);
  97. // ch:探测网络最佳包大小(只对GigE相机有效) | en:Detection network optimal
  98. // package size(It only works for the GigE camera)
  99. int GetOptimalPacketSize(unsigned int *pOptimalPacketSize);
  100. // ch:注册消息异常回调 | en:Register Message Exception CallBack
  101. int RegisterExceptionCallBack(
  102. void(__stdcall *cbException)(unsigned int nMsgType, void *pUser),
  103. void *pUser);
  104. // ch:注册单个事件回调 | en:Register Event CallBack
  105. int RegisterEventCallBack(
  106. const char *pEventName,
  107. void(__stdcall *cbEvent)(MV_EVENT_OUT_INFO *pEventInfo, void *pUser),
  108. void *pUser);
  109. // ch:强制IP | en:Force IP
  110. int ForceIp(unsigned int nIP, unsigned int nSubNetMask,
  111. unsigned int nDefaultGateWay);
  112. // ch:配置IP方式 | en:IP configuration method
  113. int SetIpConfig(unsigned int nType);
  114. // ch:设置网络传输模式 | en:Set Net Transfer Mode
  115. int SetNetTransMode(unsigned int nType);
  116. // ch:像素格式转换 | en:Pixel format conversion
  117. int ConvertPixelType(MV_CC_PIXEL_CONVERT_PARAM *pstCvtParam);
  118. // ch:保存图片 | en:save image
  119. int SaveImage(MV_SAVE_IMAGE_PARAM_EX *pstParam);
  120. // ch:保存图片为文件 | en:Save the image as a file
  121. //int SaveImageToFile(MV_SAVE_IMG_TO_FILE_PARAM *pstParam);
  122. //设置是否为触发模式
  123. int setTriggerMode(unsigned int TriggerModeNum);
  124. //设置触发源
  125. int setTriggerSource(unsigned int TriggerSourceNum);
  126. //软触发
  127. int softTrigger();
  128. //读取buffer
  129. int ReadBuffer(cv::Mat &image);
  130. //读取buffer
  131. int ReadBuffer2(cv::Mat &image,bool saveFlag,QByteArray imageName);
  132. //设置曝光时间
  133. int setExposureTime(float ExposureTimeNum);
  134. public:
  135. void *m_hDevHandle;
  136. unsigned int m_nTLayerType;
  137. public:
  138. unsigned char *m_pBufForSaveImage; // 用于保存图像的缓存
  139. unsigned int m_nBufSizeForSaveImage;
  140. unsigned char *m_pBufForDriver; // 用于从驱动获取图像的缓存
  141. unsigned int m_nBufSizeForDriver;
  142. };
  143. #endif // MVCAMERA_H
3)mythread.h
#ifndef MYTHREAD_H
#define MYTHREAD_H
#include "QThread"
#include "MvCamera.h"
#include "opencv2/opencv.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <vector>
#include <string>
#include <algorithm>
#include <iostream>
#include <iterator>
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <QTimer>
using namespace std;
using namespace cv;

class MyThread : public QThread
{
    Q_OBJECT
public:
    MyThread();
    ~MyThread();
    void run();
    void getCameraPtr(CMvCamera* camera);
    void getImagePtr(Mat* image);
    void getCameraIndex(int index);
signals:
    void mess();
    void Display(const Mat* image);
    // signal that hands the grabbed image over for processing/detection
    void GetPtr(Mat* image2);
private:
    CMvCamera* cameraPtr = NULL;
    cv::Mat* imagePtr = NULL;
    //cv::Mat* imagePtr2 = NULL;
    int cameraIndex;
    int TriggerMode;
    QTimer *time;
};
#endif // MYTHREAD_H
4)widget.h
  1. #ifndef WIDGET_H
  2. #define WIDGET_H
  3. #include <QWidget>
  4. #include <QMessageBox>
  5. #include <QCloseEvent>
  6. #include <QSettings>
  7. #include <QDate>
  8. #include <QDir>
  9. #include "MvCamera.h"
  10. #include "mythread.h"
  11. #include "mainwidget.h"
  12. #define MAX_DEVICE_NUM 2
  13. #define TRIGGER_SOURCE 7
  14. #define EXPOSURE_TIME 40000
  15. #define FRAME 30
  16. #define TRIGGER_ON 1
  17. #define TRIGGER_OFF 0
  18. #define START_GRABBING_ON 1
  19. #define START_GRABBING_OFF 0
  20. #define IMAGE_NAME_LEN 64
  21. QT_BEGIN_NAMESPACE
  22. namespace Ui {
  23. class Widget;
  24. }
  25. QT_END_NAMESPACE
  26. class MainWidget;
  27. class Widget : public QWidget
  28. {
  29. Q_OBJECT
  30. public:
  31. Widget(QWidget *parent = nullptr);
  32. ~Widget();
  33. signals:
  34. void closedWid();
  35. void back();
  36. public:
  37. CMvCamera *m_pcMyCamera[MAX_DEVICE_NUM]; // 相机指针对象
  38. MV_CC_DEVICE_INFO_LIST m_stDevList; // 存储设备列表
  39. cv::Mat *myImage_L = new cv::Mat(); //保存左相机图像的图像指针对象
  40. cv::Mat *myImage_R = new cv::Mat(); //保存右相机有图像的图像指针对象
  41. int devices_num; // 设备数量
  42. MainWidget *mainWid = NULL; //创建一个存储主界面窗体的指针
  43. public:
  44. MyThread *myThread_Camera_show = NULL; //相机实时显示线程对象
  45. private slots:
  46. void on_pbn_enum_camera_clicked();
  47. void on_pbn_open_camera_clicked();
  48. void on_rdo_continue_mode_clicked();
  49. void on_rdo_softigger_mode_clicked();
  50. void on_pbn_start_grabbing_clicked();
  51. void on_pbn_stop_grabbing_clicked();
  52. void on_pbn_software_once_clicked();
  53. void display_myImage_L(const Mat *imagePrt);
  54. void display_myImage_Main(const Mat *imagePrt);
  55. void on_pbn_close_camera_clicked();
  56. void on_pbn_save_BMP_clicked();
  57. void on_pbn_save_JPG_clicked();
  58. void on_le_set_exposure_textChanged(const QString &arg1);
  59. void on_le_set_gain_textChanged(const QString &arg1);
  60. void on_pbn_return_main_clicked();
  61. public:
  62. // 状态
  63. bool m_bOpenDevice; // 是否打开设备
  64. bool m_bStartGrabbing; // 是否开始抓图
  65. int m_nTriggerMode; // 触发模式
  66. int m_bContinueStarted; // 开启过连续采集图像
  67. MV_SAVE_IAMGE_TYPE m_nSaveImageType; // 保存图像格式
  68. private:
  69. QString PrintDeviceInfo(MV_CC_DEVICE_INFO *pstMVDevInfo, int num_index);
  70. QString m_SaveImagePath;
  71. void OpenDevices();
  72. void CloseDevices();
  73. void SaveImage();
  74. void saveImage(QString format,int index);
  75. private:
  76. Ui::Widget *ui;
  77. protected:
  78. void closeEvent(QCloseEvent *event) override; //重写关闭事件处理函数
  79. };
  80. #endif // WIDGET_H
5)yolov5.h
#pragma once
#include <iostream>
#include <opencv2/opencv.hpp>
#include <QString>
#include <math.h>
using namespace std;

#define YOLO_P6 false    // whether the P6 model is used

struct Output {
    int id;              // class id of the detection
    float confidence;    // confidence of the detection
    cv::Rect box;        // bounding box
};

class YoloV5 {
public:
    YoloV5() {}
    ~YoloV5() {}
    bool readModel(cv::dnn::Net& net, std::string& netPath, bool isCuda);
    bool Detect(cv::Mat& SrcImg, cv::dnn::Net& net, std::vector<Output>& output);
    void drawPred(cv::Mat& img, std::vector<Output> result, std::vector<cv::Scalar> color);
    void getTime(QString msg);
private:
    float Sigmoid(float x) {
        return static_cast<float>(1.f / (1.f + exp(-x)));
    }
    // anchors
    const float netAnchors[3][6] = { { 10.0, 13.0, 16.0, 30.0, 33.0, 23.0 },
                                     { 30.0, 61.0, 62.0, 45.0, 59.0, 119.0 },
                                     { 116.0, 90.0, 156.0, 198.0, 373.0, 326.0 } };
    // stride
    const float netStride[3] = { 8.0, 16.0, 32.0 };
    const int netWidth = 320;    // network input size
    const int netHeight = 320;
    float nmsThreshold = 0.45;
    float boxThreshold = 0.35;
    float classThreshold = 0.35;
public:
    std::vector<std::string> className = { /*"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
        "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
        "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
        "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
        "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
        "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
        "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
        "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
        "hair drier", "toothbrush" */ "OK", "NG" };
};

One thing worth pointing out: the vector className must be changed to your own classes. Since my dataset only has the two classes "OK" and "NG", I commented out the original 80 COCO class names. Adjust this and the other parameters to your own needs; they are all in this header.
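        As an illustration of the loading side, readModel can be written as a thin wrapper around cv::dnn::readNetFromONNX. The version below is only a sketch of one possible implementation; the isCuda flag is ignored here since the Raspberry Pi has no CUDA-capable GPU anyway.

bool YoloV5::readModel(cv::dnn::Net& net, std::string& netPath, bool isCuda)
{
    try {
        net = cv::dnn::readNetFromONNX(netPath);   // load the exported onnx weights
    } catch (const cv::Exception&) {
        return false;                              // bad path or unsupported model
    }
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
    return !net.empty();
}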

(3) cpp files
1)mainwidget.cpp
  1. #include "mainwidget.h"
  2. #include "ui_mainwidget.h"
  3. #include "widget.h"
  4. #include <QPushButton>
  5. MainWidget::MainWidget(QWidget *parent) :
  6. QWidget(parent),
  7. ui(new Ui::MainWidget)
  8. {
  9. ui->setupUi(this);
  10. setWindowTitle(QStringLiteral("4079Lab"));
  11. ui->lbl_res_pic->setScaledContents(true);
  12. ui->txt_message->setLineWrapMode(QTextEdit::WidgetWidth);
  13. timer =new QTimer(this);
  14. timer->setInterval(30); //设置定时器触发的间隔
  15. num=1;
  16. //将定时器与读取数据流的函数连接起来,每当定时器触发,执行readFrame
  17. connect(timer,&QTimer::timeout,this,&MainWidget::readFrame);
  18. //将定时器与读取相机帧率以及检测的函数连接起来,每当定时器触发,执行yoloDetect
  19. //ui->startdetect->setEnabled(false);
  20. this->wid=new Widget();
  21. //点击相机配置按钮,显示相机配置界面,隐藏主界面
  22. connect(ui->pbn_start_grab,&QPushButton::clicked,wid,[=]()
  23. {
  24. this->hide();
  25. wid->show();
  26. });
  27. //关闭相机配置界面后,显示主界面
  28. connect(this->wid, &Widget::closedWid, this,[=]()
  29. {
  30. this->show();
  31. });
  32. //主界面接受信号后,会显示主界面,隐藏相机界面
  33. connect(this->wid,&Widget::back,this,[=]()
  34. {
  35. this->wid->hide();
  36. this->show();
  37. });
  38. this->myThread_Camera_Mainshow=new MyThread;
  39. connect(wid->myThread_Camera_show, SIGNAL(GetPtr(Mat *)), this,
  40. SLOT(yoloDetect(Mat *)));
  41. //设置打开图片和摄像头button为disable,只有当加载模型后才会Enable
  42. ui->openfile->setEnabled(false);
  43. ui->pbn_start_grab->setEnabled(false);
  44. }
  45. MainWidget::~MainWidget()
  46. {
  47. capture->release();
  48. delete capture;
  49. delete ui;
  50. delete wid;
  51. }
  52. //读取流函数
  53. void MainWidget::readFrame()
  54. {
  55. cv::Mat frame;
  56. capture->read(frame);
  57. if (frame.empty())
  58. return;
  59. auto start = std::chrono::steady_clock::now();
  60. //yolov5->detect(frame);
  61. auto end = std::chrono::steady_clock::now();
  62. std::chrono::duration<double, std::milli> elapsed = end - start;
  63. ui->txt_message->append(QString("cost_time: %1 ms").arg(elapsed.count()));
  64. cv::cvtColor(frame, frame, cv::COLOR_BGR2RGB);
  65. QImage rawImage = QImage((uchar*)(frame.data),frame.cols,frame.rows,frame.step,QImage::Format_RGB888);
  66. ui->lbl_res_pic->setPixmap(QPixmap::fromImage(rawImage));
  67. }
  68. //根据定时器触发实时检测
  69. void MainWidget::yoloDetect(Mat *myImage_Main)
  70. {
  71. //读取相机流
  72. //m_pcMyMainCamera[0]->ReadBuffer(*myImage_Main);
  73. if(myImage_Main!=NULL)
  74. {
  75. cv::Mat temp;
  76. if(myImage_Main->channels()==4)
  77. cv::cvtColor(*myImage_Main,temp,cv::COLOR_BGRA2RGB);
  78. else if (myImage_Main->channels()==3)
  79. cv::cvtColor(*myImage_Main,temp,cv::COLOR_BGR2RGB);
  80. else
  81. cv::cvtColor(*myImage_Main,temp,cv::COLOR_GRAY2RGB);
  82. vector<Scalar> color;
  83. srand(time(0));
  84. for (int i = 0; i < 80; i++) {
  85. int b = rand() % 256;
  86. int g = rand() % 256;
  87. int r = rand() % 256;
  88. color.push_back(Scalar(b, g, r));
  89. }
  90. vector<Output> result;
  91. auto start = std::chrono::steady_clock::now();
  92. yolov5.Detect(temp,net,result);
  93. auto end = std::chrono::steady_clock::now();
  94. std::chrono::duration<double, std::milli> elapsed = end - start;
  95. ui->time_tet->setOverwriteMode(true);
  96. ui->time_tet->setText(QString("%1 ms").arg(elapsed.count()));
  97. ui->txt_message->append(QString("%1 result:%2 const_time: %3 ms").arg(num).arg((className1[result[0].id].data())).arg(elapsed.count()));
  98. yolov5.drawPred(temp, result, color);
  99. QImage img = QImage((uchar*)(temp.data),temp.cols,temp.rows,temp.step,QImage::Format_RGB888);
  100. ui->lbl_res_pic->setPixmap(QPixmap::fromImage(img));
  101. ui->lbl_res_pic->resize(ui->lbl_res_pic->pixmap()->size());
  102. if(ui->checkBox->isChecked())
  103. {
  104. QString savePath="/home/joe/program/VersionDetect/LiteTest/res_img/num.png";
  105. img.save(savePath);
  106. }
  107. result.pop_back();
  108. num++;
  109. }
  110. }
  111. //加载相机界面
  112. void MainWidget::on_pbn_start_grab_clicked()
  113. {
  114. // QString onnxFile = "/home/joe/program/VersionDetect/test2/yolov5s.onnx";
  115. // if (!yolov5->loadModel(onnxFile.toLatin1().data())){
  116. // ui->txt_message->append(QStringLiteral("加载模型失败!"));
  117. // return;
  118. // }
  119. // ui->txt_message->append(QString::fromUtf8("Open onnxFile: %1 succesfully!").arg(onnxFile));
  120. }
  121. //关闭子窗体 显示父窗体
  122. void MainWidget::closeW()
  123. {
  124. this->show();
  125. }
  126. //打开文件按钮的槽函数
  127. void MainWidget::on_openfile_clicked()
  128. {
  129. //过滤除了"*.mp4 *.avi;;*.png *.jpg *.jpeg *.bmp"以外的文件
  130. QString filename = QFileDialog::getOpenFileName(this,QStringLiteral("打开文件"),"/home/joe/onnx",".*.png *.jpg *.jpeg *.bmp;;*.mp4 *.avi");
  131. if(!QFile::exists(filename)){
  132. return;
  133. }
  134. QMimeDatabase db;
  135. QMimeType mime = db.mimeTypeForFile(filename);
  136. if (mime.name().startsWith("image/")) {
  137. cv::Mat src = cv::imread(filename.toLatin1().data());
  138. if(src.empty()){
  139. return;
  140. }
  141. cv::Mat temp;
  142. if(src.channels()==4)
  143. cv::cvtColor(src,temp,cv::COLOR_BGRA2RGB);
  144. else if (src.channels()==3)
  145. cv::cvtColor(src,temp,cv::COLOR_BGR2RGB);
  146. else
  147. cv::cvtColor(src,temp,cv::COLOR_GRAY2RGB);
  148. cv::Mat Rgb;
  149. QImage Img;
  150. if (src.channels() == 3)//RGB Img
  151. {
  152. cv::cvtColor(temp, Rgb, CV_BGR2RGB);//颜色空间转换
  153. Img = QImage((const uchar*)(Rgb.data), Rgb.cols, Rgb.rows, Rgb.cols * Rgb.channels(), QImage::Format_RGB888);
  154. }
  155. else//Gray Img
  156. {
  157. Img = QImage((const uchar*)(temp.data), temp.cols, temp.rows, temp.cols*temp.channels(), QImage::Format_Indexed8);
  158. }
  159. ui->lbl_res_pic->setPixmap(QPixmap::fromImage(Img));
  160. ui->lbl_res_pic->resize(ui->lbl_res_pic->pixmap()->size());
  161. vector<Scalar> color;
  162. srand(time(0));
  163. for (int i = 0; i < 80; i++) {
  164. int b = rand() % 256;
  165. int g = rand() % 256;
  166. int r = rand() % 256;
  167. color.push_back(Scalar(b, g, r));
  168. }
  169. vector<Output> result;
  170. auto start = std::chrono::steady_clock::now();
  171. yolov5.Detect(temp,net,result);
  172. auto end = std::chrono::steady_clock::now();
  173. std::chrono::duration<double, std::milli> elapsed = end - start;
  174. ui->txt_message->append(QString("%1 result:%2 const_time: %3 ms").arg(num).arg((className1[result[0].id].data())).arg(elapsed.count()));
  175. //ui->txt_message->append(QString("cost_time: %1 ms").arg(elapsed.count()));
  176. ui->time_tet->setOverwriteMode(true);
  177. ui->time_tet->setText(QString("%1 ms").arg(elapsed.count()));
  178. yolov5.drawPred(temp, result, color);
  179. QImage img = QImage((uchar*)(temp.data),temp.cols,temp.rows,temp.step,QImage::Format_RGB888);
  180. ui->lbl_res_pic->setPixmap(QPixmap::fromImage(img));
  181. ui->lbl_res_pic->resize(ui->lbl_res_pic->pixmap()->size());
  182. if(ui->checkBox->isChecked())
  183. {
  184. QString savePath="/home/joe/program/VersionDetect/LiteTest/res_img/num.png";
  185. img.save(savePath);
  186. }
  187. num++;
  188. filename.clear();
  189. }else if (mime.name().startsWith("video/")) {
  190. capture->open(filename.toLatin1().data());
  191. if (!capture->isOpened()){
  192. ui->txt_message->append("fail to open MP4!");
  193. return;
  194. }
  195. IsDetect_ok +=1;
  196. // if (IsDetect_ok ==2)
  197. // ui->startdetect->setEnabled(true);
  198. ui->txt_message->append(QString::fromUtf8("Open video: %1 succesfully!").arg(filename));
  199. //获取整个帧数QStringLiteral
  200. long totalFrame = capture->get(cv::CAP_PROP_FRAME_COUNT);
  201. int width = capture->get(cv::CAP_PROP_FRAME_WIDTH);
  202. int height = capture->get(cv::CAP_PROP_FRAME_HEIGHT);
  203. ui->txt_message->append(QStringLiteral("整个视频共 %1 帧, 宽=%2 高=%3 ").arg(totalFrame).arg(width).arg(height));
  204. ui->lbl_res_pic->resize(QSize(width, height));
  205. //设置开始帧()
  206. long frameToStart = 0;
  207. capture->set(cv::CAP_PROP_POS_FRAMES, frameToStart);
  208. ui->txt_message->append(QStringLiteral("从第 %1 帧开始读").arg(frameToStart));
  209. //获取帧率
  210. double rate = capture->get(cv::CAP_PROP_FPS);
  211. ui->txt_message->append(QStringLiteral("帧率为: %1 ").arg(rate));
  212. }
  213. }
  214. void MainWidget::on_pbn_load_model_clicked()
  215. {
  216. QString filePath = QFileDialog::getOpenFileName(this,QStringLiteral("打开文件"),"/home/joe/onnx","*.onnx");
  217. if(!QFile::exists(filePath)){
  218. return;
  219. }
  220. string model_path=filePath.toStdString();
  221. if (yolov5.readModel(net, model_path, true)) {
  222. ui->txt_message->append("模型加载成功");
  223. //提取文件名并显示在TextEdit上
  224. QFileInfo fileInfo(filePath);
  225. QString fileName=fileInfo.fileName();
  226. ui->fileName_tet->setText(fileName);
  227. ui->fileName_tet->show();
  228. ui->openfile->setEnabled(true);
  229. ui->pbn_start_grab->setEnabled(true);
  230. }
  231. else
  232. {
  233. ui->txt_message->append("模型加载失败");
  234. }
  235. }
  236. //获取图像信息
  237. void MainWidget::getPtr()
  238. {
  239. // 触发模式标记一下,切换触发模式时先执行停止采集图像函数
  240. m_bContinueStarted = 1;
  241. if (m_nTriggerMode == TRIGGER_ON) {
  242. // 开始采集之后才创建workthread线程
  243. //开启相机采集
  244. m_pcMyMainCamera[0]->StartGrabbing();
  245. int camera_Index=0;
  246. if (camera_Index == 0) {
  247. myThread_Camera_Mainshow->getCameraPtr(
  248. m_pcMyMainCamera[0]); //线程获取左相机指针
  249. myThread_Camera_Mainshow->getImagePtr(
  250. myImage_Main); //线程获取图像指针
  251. myThread_Camera_Mainshow->getCameraIndex(0); //相机 Index==0
  252. if (!myThread_Camera_Mainshow->isRunning()) {
  253. myThread_Camera_Mainshow->start();
  254. m_pcMyMainCamera[0]->softTrigger();
  255. m_pcMyMainCamera[0]->ReadBuffer(*myImage_Main); //读取Mat格式的图像
  256. }
  257. }
  258. }
  259. }
  260. //在主界面显示
  261. void MainWidget::display_myImage_Main(const Mat *imagePrt)
  262. {
  263. cv::Mat rgb;
  264. cv::cvtColor(*imagePrt, rgb, CV_BGR2RGB);
  265. QImage QmyImage_L;
  266. QmyImage_L = QImage((const unsigned char *)(rgb.data), rgb.cols,
  267. rgb.rows, QImage::Format_RGB888);
  268. QmyImage_L = (QmyImage_L)
  269. .scaled(ui->lbl_res_pic->size(), Qt::IgnoreAspectRatio,
  270. Qt::SmoothTransformation); //饱满填充
  271. //显示图像
  272. //ui->lbl_camera_L->setPixmap(QPixmap::fromImage(QmyImage_L));
  273. ui->lbl_res_pic->setPixmap(QPixmap::fromImage(QmyImage_L));
  274. }
  275. void MainWidget::on_startdetect_clicked()
  276. {
  277. timer->start();
  278. // ui->startdetect->setEnabled(false);
  279. // ui->stopdetect->setEnabled(true);
  280. ui->openfile->setEnabled(false);
  281. ui->pbn_load_model->setEnabled(false);
  282. // ui->cbx_modelName->setEnabled(false);
  283. ui->txt_message->append(QStringLiteral("=======================\n"
  284. " 开始检测\n"
  285. "=======================\n"));
  286. }
  287. void MainWidget::on_stopdetect_clicked()
  288. {
  289. // ui->startdetect->setEnabled(true);
  290. // ui->stopdetect->setEnabled(false);
  291. ui->openfile->setEnabled(true);
  292. ui->pbn_load_model->setEnabled(true);
  293. // ui->cbx_modelName->setEnabled(true);
  294. timer->stop();
  295. ui->txt_message->append(QStringLiteral("======================\n"
  296. " 停止检测\n"
  297. "======================\n"));
  298. }
  299. void MainWidget::on_cbx_modelName_activated(const QString &arg1)
  300. {
  301. }
  302. void MainWidget::on_checkBox_stateChanged(int arg1)
  303. {
  304. if(ui->checkBox->isChecked())
  305. {
  306. ui->txt_message->append("您选择了保存结果,保存路径为/home/joe/program/VersionDetect/LiteTest/res_img");
  307. }
  308. }
2)MvCamera.cpp
  1. #include "MvCamera.h"
  2. #include <stdio.h>
  3. CMvCamera::CMvCamera()
  4. {
  5. m_hDevHandle = MV_NULL;
  6. }
  7. CMvCamera::~CMvCamera()
  8. {
  9. if (m_hDevHandle) {
  10. MV_CC_DestroyHandle(m_hDevHandle);
  11. m_hDevHandle = MV_NULL;
  12. }
  13. }
  14. // ch:获取SDK版本号 | en:Get SDK Version
  15. int CMvCamera::GetSDKVersion()
  16. {
  17. return MV_CC_GetSDKVersion();
  18. }
  19. // ch:枚举设备 | en:Enumerate Device
  20. int CMvCamera::EnumDevices(unsigned int nTLayerType,
  21. MV_CC_DEVICE_INFO_LIST *pstDevList)
  22. {
  23. return MV_CC_EnumDevices(nTLayerType, pstDevList);
  24. }
  25. // ch:判断设备是否可达 | en:Is the device accessible
  26. bool CMvCamera::IsDeviceAccessible(MV_CC_DEVICE_INFO *pstDevInfo,
  27. unsigned int nAccessMode)
  28. {
  29. return MV_CC_IsDeviceAccessible(pstDevInfo, nAccessMode);
  30. }
  31. // ch:打开设备 | en:Open Device
  32. int CMvCamera::Open(MV_CC_DEVICE_INFO *pstDeviceInfo)
  33. {
  34. if (MV_NULL == pstDeviceInfo) {
  35. return MV_E_PARAMETER;
  36. }
  37. if (m_hDevHandle) {
  38. return MV_E_CALLORDER;
  39. }
  40. int nRet = MV_CC_CreateHandle(&m_hDevHandle, pstDeviceInfo);
  41. if (MV_OK != nRet) {
  42. return nRet;
  43. }
  44. nRet = MV_CC_OpenDevice(m_hDevHandle);
  45. if (MV_OK != nRet) {
  46. MV_CC_DestroyHandle(m_hDevHandle);
  47. m_hDevHandle = MV_NULL;
  48. }
  49. return nRet;
  50. }
  51. // ch:关闭设备 | en:Close Device
  52. int CMvCamera::Close()
  53. {
  54. if (MV_NULL == m_hDevHandle) {
  55. return MV_E_HANDLE;
  56. }
  57. MV_CC_CloseDevice(m_hDevHandle);
  58. int nRet = MV_CC_DestroyHandle(m_hDevHandle);
  59. m_hDevHandle = MV_NULL;
  60. return nRet;
  61. }
  62. // ch:判断相机是否处于连接状态 | en:Is The Device Connected
  63. bool CMvCamera::IsDeviceConnected()
  64. {
  65. return MV_CC_IsDeviceConnected(m_hDevHandle);
  66. }
  67. // ch:注册图像数据回调 | en:Register Image Data CallBack
  68. int CMvCamera::RegisterImageCallBack(
  69. void(__stdcall *cbOutput)(unsigned char *pData,
  70. MV_FRAME_OUT_INFO_EX *pFrameInfo, void *pUser),
  71. void *pUser)
  72. {
  73. return MV_CC_RegisterImageCallBackEx(m_hDevHandle, cbOutput, pUser);
  74. }
  75. // ch:开启抓图 | en:Start Grabbing
  76. int CMvCamera::StartGrabbing()
  77. {
  78. return MV_CC_StartGrabbing(m_hDevHandle);
  79. }
  80. // ch:停止抓图 | en:Stop Grabbing
  81. int CMvCamera::StopGrabbing()
  82. {
  83. return MV_CC_StopGrabbing(m_hDevHandle);
  84. }
  85. // ch:主动获取一帧图像数据 | en:Get one frame initiatively
  86. int CMvCamera::GetImageBuffer(MV_FRAME_OUT *pFrame, int nMsec)
  87. {
  88. return MV_CC_GetImageBuffer(m_hDevHandle, pFrame, nMsec);
  89. }
  90. // ch:释放图像缓存 | en:Free image buffer
  91. int CMvCamera::FreeImageBuffer(MV_FRAME_OUT *pFrame)
  92. {
  93. return MV_CC_FreeImageBuffer(m_hDevHandle, pFrame);
  94. }
  95. // ch:主动获取一帧图像数据 | en:Get one frame initiatively
  96. int CMvCamera::GetOneFrameTimeout(unsigned char *pData, unsigned int *pnDataLen,
  97. unsigned int nDataSize,
  98. MV_FRAME_OUT_INFO_EX *pFrameInfo, int nMsec)
  99. {
  100. if (NULL == pnDataLen) {
  101. return MV_E_PARAMETER;
  102. }
  103. int nRet = MV_OK;
  104. *pnDataLen = 0;
  105. nRet = MV_CC_GetOneFrameTimeout(m_hDevHandle, pData, nDataSize, pFrameInfo,
  106. nMsec);
  107. if (MV_OK != nRet) {
  108. return nRet;
  109. }
  110. *pnDataLen = pFrameInfo->nFrameLen;
  111. return nRet;
  112. }
  113. // ch:设置显示窗口句柄 | en:Set Display Window Handle
  114. int CMvCamera::DisplayOneFrame(MV_DISPLAY_FRAME_INFO *pDisplayInfo)
  115. {
  116. return MV_CC_DisplayOneFrame(m_hDevHandle, pDisplayInfo);
  117. }
  118. // ch:设置SDK内部图像缓存节点个数 | en:Set the number of the internal image
  119. // cache nodes in SDK
  120. int CMvCamera::SetImageNodeNum(unsigned int nNum)
  121. {
  122. return MV_CC_SetImageNodeNum(m_hDevHandle, nNum);
  123. }
  124. // ch:获取设备信息 | en:Get device information
  125. int CMvCamera::GetDeviceInfo(MV_CC_DEVICE_INFO *pstDevInfo)
  126. {
  127. return MV_CC_GetDeviceInfo(m_hDevHandle, pstDevInfo);
  128. }
  129. // ch:获取GEV相机的统计信息 | en:Get detect info of GEV camera
  130. int CMvCamera::GetGevAllMatchInfo(MV_MATCH_INFO_NET_DETECT *pMatchInfoNetDetect)
  131. {
  132. if (MV_NULL == pMatchInfoNetDetect) {
  133. return MV_E_PARAMETER;
  134. }
  135. MV_CC_DEVICE_INFO stDevInfo = { 0 };
  136. GetDeviceInfo(&stDevInfo);
  137. if (stDevInfo.nTLayerType != MV_GIGE_DEVICE) {
  138. return MV_E_SUPPORT;
  139. }
  140. MV_ALL_MATCH_INFO struMatchInfo = { 0 };
  141. struMatchInfo.nType = MV_MATCH_TYPE_NET_DETECT;
  142. struMatchInfo.pInfo = pMatchInfoNetDetect;
  143. struMatchInfo.nInfoSize = sizeof(MV_MATCH_INFO_NET_DETECT);
  144. memset(struMatchInfo.pInfo, 0, sizeof(MV_MATCH_INFO_NET_DETECT));
  145. return MV_CC_GetAllMatchInfo(m_hDevHandle, &struMatchInfo);
  146. }
  147. // ch:获取U3V相机的统计信息 | en:Get detect info of U3V camera
  148. int CMvCamera::GetU3VAllMatchInfo(MV_MATCH_INFO_USB_DETECT *pMatchInfoUSBDetect)
  149. {
  150. if (MV_NULL == pMatchInfoUSBDetect) {
  151. return MV_E_PARAMETER;
  152. }
  153. MV_CC_DEVICE_INFO stDevInfo = { 0 };
  154. GetDeviceInfo(&stDevInfo);
  155. if (stDevInfo.nTLayerType != MV_USB_DEVICE) {
  156. return MV_E_SUPPORT;
  157. }
  158. MV_ALL_MATCH_INFO struMatchInfo = { 0 };
  159. struMatchInfo.nType = MV_MATCH_TYPE_USB_DETECT;
  160. struMatchInfo.pInfo = pMatchInfoUSBDetect;
  161. struMatchInfo.nInfoSize = sizeof(MV_MATCH_INFO_USB_DETECT);
  162. memset(struMatchInfo.pInfo, 0, sizeof(MV_MATCH_INFO_USB_DETECT));
  163. return MV_CC_GetAllMatchInfo(m_hDevHandle, &struMatchInfo);
  164. }
  165. // ch:获取和设置Int型参数,如 Width和Height,详细内容参考SDK安装目录下的
  166. // MvCameraNode.xlsx 文件 en:Get Int type parameters, such as Width and Height,
  167. // for details please refer to MvCameraNode.xlsx file under SDK installation
  168. // directory
  169. int CMvCamera::GetIntValue(IN const char *strKey, OUT unsigned int *pnValue)
  170. {
  171. if (NULL == strKey || NULL == pnValue) {
  172. return MV_E_PARAMETER;
  173. }
  174. MVCC_INTVALUE stParam;
  175. memset(&stParam, 0, sizeof(MVCC_INTVALUE));
  176. int nRet = MV_CC_GetIntValue(m_hDevHandle, strKey, &stParam);
  177. if (MV_OK != nRet) {
  178. return nRet;
  179. }
  180. *pnValue = stParam.nCurValue;
  181. return MV_OK;
  182. }
  183. int CMvCamera::SetIntValue(IN const char *strKey, IN int64_t nValue)
  184. {
  185. return MV_CC_SetIntValueEx(m_hDevHandle, strKey, nValue);
  186. }
  187. // ch:获取和设置Enum型参数,如 PixelFormat,详细内容参考SDK安装目录下的
  188. // MvCameraNode.xlsx 文件 en:Get Enum type parameters, such as PixelFormat, for
  189. // details please refer to MvCameraNode.xlsx file under SDK installation
  190. // directory
  191. int CMvCamera::GetEnumValue(IN const char *strKey,
  192. OUT MVCC_ENUMVALUE *pEnumValue)
  193. {
  194. return MV_CC_GetEnumValue(m_hDevHandle, strKey, pEnumValue);
  195. }
  196. int CMvCamera::SetEnumValue(IN const char *strKey, IN unsigned int nValue)
  197. {
  198. return MV_CC_SetEnumValue(m_hDevHandle, strKey, nValue);
  199. }
  200. int CMvCamera::SetEnumValueByString(IN const char *strKey,
  201. IN const char *sValue)
  202. {
  203. return MV_CC_SetEnumValueByString(m_hDevHandle, strKey, sValue);
  204. }
  205. // ch:获取和设置Float型参数,如 ExposureTime和Gain,详细内容参考SDK安装目录下的
  206. // MvCameraNode.xlsx 文件 en:Get Float type parameters, such as ExposureTime and
  207. // Gain, for details please refer to MvCameraNode.xlsx file under SDK
  208. // installation directory
  209. int CMvCamera::GetFloatValue(IN const char *strKey,
  210. OUT MVCC_FLOATVALUE *pFloatValue)
  211. {
  212. return MV_CC_GetFloatValue(m_hDevHandle, strKey, pFloatValue);
  213. }
  214. int CMvCamera::SetFloatValue(IN const char *strKey, IN float fValue)
  215. {
  216. return MV_CC_SetFloatValue(m_hDevHandle, strKey, fValue);
  217. }
  218. // ch:获取和设置Bool型参数,如 ReverseX,详细内容参考SDK安装目录下的
  219. // MvCameraNode.xlsx 文件 en:Get Bool type parameters, such as ReverseX, for
  220. // details please refer to MvCameraNode.xlsx file under SDK installation
  221. // directory
  222. int CMvCamera::GetBoolValue(IN const char *strKey, OUT bool *pbValue)
  223. {
  224. return MV_CC_GetBoolValue(m_hDevHandle, strKey, pbValue);
  225. }
  226. int CMvCamera::SetBoolValue(IN const char *strKey, IN bool bValue)
  227. {
  228. return MV_CC_SetBoolValue(m_hDevHandle, strKey, bValue);
  229. }
  230. // ch:获取和设置String型参数,如 DeviceUserID,详细内容参考SDK安装目录下的
  231. // MvCameraNode.xlsx 文件UserSetSave en:Get String type parameters, such as
  232. // DeviceUserID, for details please refer to MvCameraNode.xlsx file under SDK
  233. // installation directory
  234. int CMvCamera::GetStringValue(IN const char *strKey,
  235. MVCC_STRINGVALUE *pStringValue)
  236. {
  237. return MV_CC_GetStringValue(m_hDevHandle, strKey, pStringValue);
  238. }
  239. int CMvCamera::SetStringValue(IN const char *strKey, IN const char *strValue)
  240. {
  241. return MV_CC_SetStringValue(m_hDevHandle, strKey, strValue);
  242. }
  243. // ch:执行一次Command型命令,如 UserSetSave,详细内容参考SDK安装目录下的
  244. // MvCameraNode.xlsx 文件 en:Execute Command once, such as UserSetSave, for
  245. // details please refer to MvCameraNode.xlsx file under SDK installation
  246. // directory
  247. int CMvCamera::CommandExecute(IN const char *strKey)
  248. {
  249. return MV_CC_SetCommandValue(m_hDevHandle, strKey);
  250. }
  251. // ch:探测网络最佳包大小(只对GigE相机有效) | en:Detection network optimal
  252. // package size(It only works for the GigE camera)
  253. int CMvCamera::GetOptimalPacketSize(unsigned int *pOptimalPacketSize)
  254. {
  255. if (MV_NULL == pOptimalPacketSize) {
  256. return MV_E_PARAMETER;
  257. }
  258. int nRet = MV_CC_GetOptimalPacketSize(m_hDevHandle);
  259. if (nRet < MV_OK) {
  260. return nRet;
  261. }
  262. *pOptimalPacketSize = (unsigned int)nRet;
  263. return MV_OK;
  264. }
  265. // ch:注册消息异常回调 | en:Register Message Exception CallBack
  266. int CMvCamera::RegisterExceptionCallBack(
  267. void(__stdcall *cbException)(unsigned int nMsgType, void *pUser),
  268. void *pUser)
  269. {
  270. return MV_CC_RegisterExceptionCallBack(m_hDevHandle, cbException, pUser);
  271. }
  272. // ch:注册单个事件回调 | en:Register Event CallBack
  273. int CMvCamera::RegisterEventCallBack(
  274. const char *pEventName,
  275. void(__stdcall *cbEvent)(MV_EVENT_OUT_INFO *pEventInfo, void *pUser),
  276. void *pUser)
  277. {
  278. return MV_CC_RegisterEventCallBackEx(m_hDevHandle, pEventName, cbEvent,
  279. pUser);
  280. }
  281. // ch:强制IP | en:Force IP
  282. int CMvCamera::ForceIp(unsigned int nIP, unsigned int nSubNetMask,
  283. unsigned int nDefaultGateWay)
  284. {
  285. return MV_GIGE_ForceIpEx(m_hDevHandle, nIP, nSubNetMask, nDefaultGateWay);
  286. }
  287. // ch:配置IP方式 | en:IP configuration method
  288. int CMvCamera::SetIpConfig(unsigned int nType)
  289. {
  290. return MV_GIGE_SetIpConfig(m_hDevHandle, nType);
  291. }
  292. // ch:设置网络传输模式 | en:Set Net Transfer Mode
  293. int CMvCamera::SetNetTransMode(unsigned int nType)
  294. {
  295. return MV_GIGE_SetNetTransMode(m_hDevHandle, nType);
  296. }
  297. // ch:像素格式转换 | en:Pixel format conversion
  298. int CMvCamera::ConvertPixelType(MV_CC_PIXEL_CONVERT_PARAM *pstCvtParam)
  299. {
  300. return MV_CC_ConvertPixelType(m_hDevHandle, pstCvtParam);
  301. }
  302. // ch:保存图片 | en:save image
  303. int CMvCamera::SaveImage(MV_SAVE_IMAGE_PARAM_EX *pstParam)
  304. {
  305. return MV_CC_SaveImageEx2(m_hDevHandle, pstParam);
  306. }
  307. //ch:保存图片为文件 | en:Save the image as a file
  308. // int CMvCamera::SaveImageToFile(MV_SAVE_IMG_TO_FILE_PARAM *pstSaveFileParam)
  309. //{
  310. // return MV_CC_SaveImageToFile(m_hDevHandle, pstSaveFileParam);
  311. //}
  312. //设置是否为触发模式
  313. int CMvCamera::setTriggerMode(unsigned int TriggerModeNum)
  314. {
  315. // 0Off 1On
  316. int tempValue =
  317. MV_CC_SetEnumValue(m_hDevHandle, "TriggerMode", TriggerModeNum);
  318. if (tempValue != 0) {
  319. return -1;
  320. } else {
  321. return 0;
  322. }
  323. }
  324. //设置触发源
  325. int CMvCamera::setTriggerSource(unsigned int TriggerSourceNum)
  326. {
  327. // 0Line0 1Line1 7:Software
  328. int tempValue =
  329. MV_CC_SetEnumValue(m_hDevHandle, "TriggerSource", TriggerSourceNum);
  330. if (tempValue != 0) {
  331. return -1;
  332. } else {
  333. return 0;
  334. }
  335. }
  336. //发送软触发
  337. int CMvCamera::softTrigger()
  338. {
  339. int tempValue = MV_CC_SetCommandValue(m_hDevHandle, "TriggerSoftware");
  340. if (tempValue != 0) {
  341. return -1;
  342. } else {
  343. return 0;
  344. }
  345. }
  346. //读取相机中的图像
  347. // int ReadBuffer(cv::Mat &image);
  348. int CMvCamera::ReadBuffer(cv::Mat &image)
  349. {
  350. cv::Mat *getImage = new cv::Mat();
  351. unsigned int nRecvBufSize = 0;
  352. MVCC_INTVALUE stParam;
  353. memset(&stParam, 0, sizeof(MVCC_INTVALUE));
  354. int tempValue = MV_CC_GetIntValue(m_hDevHandle, "PayloadSize", &stParam);
  355. if (tempValue != 0) {
  356. return -1;
  357. }
  358. nRecvBufSize = stParam.nCurValue;
  359. unsigned char *pDate;
  360. pDate = (unsigned char *)malloc(nRecvBufSize);
  361. MV_FRAME_OUT_INFO_EX stImageInfo = { 0 };
  362. tempValue = MV_CC_GetOneFrameTimeout(m_hDevHandle, pDate, nRecvBufSize,
  363. &stImageInfo, 500);
  364. if (tempValue != 0) {
  365. return -1;
  366. }
  367. m_nBufSizeForSaveImage =
  368. stImageInfo.nWidth * stImageInfo.nHeight * 3 + 2048;
  369. unsigned char *m_pBufForSaveImage;
  370. m_pBufForSaveImage = (unsigned char *)malloc(m_nBufSizeForSaveImage);
  371. bool isMono;
  372. switch (stImageInfo.enPixelType) {
  373. case PixelType_Gvsp_Mono8:
  374. case PixelType_Gvsp_Mono10:
  375. case PixelType_Gvsp_Mono10_Packed:
  376. case PixelType_Gvsp_Mono12:
  377. case PixelType_Gvsp_Mono12_Packed:
  378. isMono = true;
  379. break;
  380. default:
  381. isMono = false;
  382. break;
  383. }
  384. if (isMono) {
  385. *getImage =
  386. cv::Mat(stImageInfo.nHeight, stImageInfo.nWidth, CV_8UC1, pDate);
  387. // imwrite("d:\\测试opencv_Mono.tif", image);
  388. } else {
  389. //转换图像格式为BGR8
  390. MV_CC_PIXEL_CONVERT_PARAM stConvertParam = { 0 };
  391. memset(&stConvertParam, 0, sizeof(MV_CC_PIXEL_CONVERT_PARAM));
  392. stConvertParam.nWidth = stImageInfo.nWidth; // ch:图像宽 | en:image
  393. // width
  394. stConvertParam.nHeight =
  395. stImageInfo.nHeight; // ch:图像高 | en:image height
  396. // stConvertParam.pSrcData = m_pBufForDriver; //ch:输入数据缓存 |
  397. // en:input data buffer
  398. stConvertParam.pSrcData =
  399. pDate; // ch:输入数据缓存 | en:input data buffer
  400. stConvertParam.nSrcDataLen =
  401. stImageInfo.nFrameLen; // ch:输入数据大小 | en:input data size
  402. stConvertParam.enSrcPixelType =
  403. stImageInfo.enPixelType; // ch:输入像素格式 | en:input pixel format
  404. stConvertParam.enDstPixelType =
  405. PixelType_Gvsp_BGR8_Packed; // ch:输出像素格式 | en:output pixel
  406. // format 适用于OPENCV的图像格式
  407. // stConvertParam.enDstPixelType = PixelType_Gvsp_RGB8_Packed;
  408. //输出像素格式 | en:output pixel format
  409. stConvertParam.pDstBuffer =
  410. m_pBufForSaveImage; // ch:输出数据缓存 | en:output data buffer
  411. stConvertParam.nDstBufferSize =
  412. m_nBufSizeForSaveImage; // ch:输出缓存大小 | en:output buffer size
  413. MV_CC_ConvertPixelType(m_hDevHandle, &stConvertParam);
  414. *getImage = cv::Mat(stImageInfo.nHeight, stImageInfo.nWidth, CV_8UC3,
  415. m_pBufForSaveImage);
  416. }
  417. (*getImage).copyTo(image);
  418. (*getImage).release();
  419. free(pDate);
  420. free(m_pBufForSaveImage);
  421. return 0;
  422. }
  423. //设置曝光时间
  424. int CMvCamera::setExposureTime(float ExposureTimeNum)
  425. {
  426. int tempValue =
  427. MV_CC_SetFloatValue(m_hDevHandle, "ExposureTime", ExposureTimeNum);
  428. if (tempValue != 0) {
  429. return -1;
  430. } else {
  431. return 0;
  432. }
  433. }
3)mythread.cpp
#include "mythread.h"

MyThread::MyThread()
{
}

MyThread::~MyThread()
{
    terminate();
    if (cameraPtr != NULL) {
        delete cameraPtr;
    }
    if (imagePtr != NULL) {
        delete imagePtr;
    }
}

void MyThread::getCameraPtr(CMvCamera *camera)
{
    cameraPtr = camera;
}

void MyThread::getImagePtr(Mat *image)
{
    imagePtr = image;
}

void MyThread::getCameraIndex(int index)
{
    cameraIndex = index;
}

void MyThread::run()
{
    if (cameraPtr == NULL) {
        return;
    }
    if (imagePtr == NULL) {
        return;
    }
    while (!isInterruptionRequested()) {
        std::cout << "Thread_Trigger:" << cameraPtr->softTrigger() << std::endl;
        std::cout << "Thread_Readbuffer:" << cameraPtr->ReadBuffer(*imagePtr)
                  << std::endl;
        //cameraPtr->ReadBuffer(*imagePtr2);
        emit mess();
        emit Display(imagePtr);  // signal received by lbl_camera_L for display
        emit GetPtr(imagePtr);   // signal that triggers YOLO detection and display
        msleep(30);
    }
}
4)widget.cpp
  1. #include "widget.h"
  2. #include "./ui_widget.h"
  3. #include <stdio.h>
  4. #include <string.h>
  5. #include <unistd.h>
  6. #include <stdlib.h>
  7. #include <fstream>
  8. #include <iostream>
  9. #include <QDebug>
  10. Widget::Widget(QWidget *parent) : QWidget(parent), ui(new Ui::Widget)
  11. {
  12. ui->setupUi(this);
  13. ui->lbl_camera_L->setPixmap(QPixmap(":/icon/MVS.png"));
  14. ui->lbl_camera_L->setScaledContents(true);
  15. //ui->lbl_camera_R->setPixmap(QPixmap("back_img.jpg"));
  16. //ui->lbl_camera_R->setScaledContents(true);
  17. // 相机初始化控件
  18. ui->pbn_enum_camera->setEnabled(true);
  19. ui->pbn_open_camera->setEnabled(false);
  20. ui->pbn_close_camera->setEnabled(false);
  21. ui->cmb_camera_index->setEnabled(false);
  22. // 图像采集控件
  23. ui->rdo_continue_mode->setEnabled(false);
  24. ui->rdo_softigger_mode->setEnabled(false);
  25. ui->pbn_start_grabbing->setEnabled(false);
  26. ui->pbn_stop_grabbing->setEnabled(false);
  27. ui->pbn_software_once->setEnabled(false);
  28. // 参数控件
  29. ui->le_set_exposure->setEnabled(false);
  30. ui->le_set_gain->setEnabled(false);
  31. // 保存图像控件
  32. ui->pbn_save_BMP->setEnabled(false);
  33. ui->pbn_save_JPG->setEnabled(false);
  34. // 线程对象实例化
  35. myThread_Camera_show = new MyThread; //相机线程对象
  36. //发送信号实现页面切换
  37. connect(ui->pbn_return_main,SIGNAL(clicked()),this,SIGNAL(back()));
  38. connect(myThread_Camera_show, SIGNAL(Display(const Mat *)), this,
  39. SLOT(display_myImage_L(const Mat *)));
  40. // 曝光和增益的输入控件限制值
  41. QRegExp int_rx("100000|([0-9]{0,5})");
  42. ui->le_set_exposure->setValidator(new QRegExpValidator(int_rx, this));
  43. QRegExp double_rx("15|([0-9]{0,1}[0-5]{1,2}[\.][0-9]{0,1})");
  44. ui->le_set_gain->setValidator(new QRegExpValidator(double_rx, this));
  45. }
  46. Widget::~Widget()
  47. {
  48. delete ui;
  49. delete mainWid;
  50. delete myThread_Camera_show;
  51. delete myImage_L;
  52. }
  53. //创建关闭子窗体事件,显示主窗体
  54. void Widget::closeEvent(QCloseEvent *event)
  55. {
  56. emit closedWid(); //发射closed信号
  57. event->accept();
  58. }
  59. void Widget::on_pbn_enum_camera_clicked()
  60. {
  61. memset(&m_stDevList, 0, sizeof(MV_CC_DEVICE_INFO_LIST));
  62. int nRet = MV_OK;
  63. nRet = CMvCamera::EnumDevices(MV_GIGE_DEVICE | MV_USB_DEVICE, &m_stDevList);
  64. devices_num = m_stDevList.nDeviceNum;
  65. if (devices_num == 0) {
  66. QString cameraInfo;
  67. cameraInfo =
  68. QString::fromLocal8Bit("暂无设备可连接,请检查设备是否连接正确!");
  69. ui->lbl_camera_messagee->setText(cameraInfo);
  70. }
  71. if (devices_num > 0) {
  72. QString cameraInfo;
  73. for (int i = 0; i < devices_num; i++) {
  74. MV_CC_DEVICE_INFO *pDeviceInfo = m_stDevList.pDeviceInfo[i];
  75. QString cameraInfo_i = PrintDeviceInfo(pDeviceInfo, i);
  76. cameraInfo.append(cameraInfo_i);
  77. cameraInfo.append("\n");
  78. ui->cmb_camera_index->addItem(QString::number(i+1));
  79. }
  80. ui->lbl_camera_messagee->setText(cameraInfo);
  81. ui->pbn_open_camera->setEnabled(true);
  82. ui->cmb_camera_index->setEnabled(true);
  83. }
  84. }
  85. //打印相机的型号、ip等信息
  86. QString Widget::PrintDeviceInfo(MV_CC_DEVICE_INFO *pstMVDevInfo, int num_index)
  87. {
  88. QString cameraInfo_index;
  89. cameraInfo_index = QString::fromLocal8Bit("相机序号:");
  90. cameraInfo_index.append(QString::number(num_index+1));
  91. cameraInfo_index.append("\t\t");
  92. // 海康GIGE协议的相机
  93. if (pstMVDevInfo->nTLayerType == MV_GIGE_DEVICE) {
  94. int nIp1 =
  95. ((pstMVDevInfo->SpecialInfo.stGigEInfo.nCurrentIp & 0xff000000) >>
  96. 24);
  97. int nIp2 =
  98. ((pstMVDevInfo->SpecialInfo.stGigEInfo.nCurrentIp & 0x00ff0000) >>
  99. 16);
  100. int nIp3 =
  101. ((pstMVDevInfo->SpecialInfo.stGigEInfo.nCurrentIp & 0x0000ff00) >>
  102. 8);
  103. int nIp4 =
  104. (pstMVDevInfo->SpecialInfo.stGigEInfo.nCurrentIp & 0x000000ff);
  105. cameraInfo_index.append(QString::fromLocal8Bit("相机型号:"));
  106. std::string str_name =
  107. (char *)pstMVDevInfo->SpecialInfo.stGigEInfo.chModelName;
  108. cameraInfo_index.append(QString::fromStdString(str_name));
  109. cameraInfo_index.append("\n");
  110. cameraInfo_index.append(QString::fromLocal8Bit("当前相机IP地址:"));
  111. cameraInfo_index.append(QString::number(nIp1));
  112. cameraInfo_index.append(".");
  113. cameraInfo_index.append(QString::number(nIp2));
  114. cameraInfo_index.append(".");
  115. cameraInfo_index.append(QString::number(nIp3));
  116. cameraInfo_index.append(".");
  117. cameraInfo_index.append(QString::number(nIp4));
  118. cameraInfo_index.append("\t");
  119. } else if (pstMVDevInfo->nTLayerType == MV_USB_DEVICE) {
  120. cameraInfo_index.append(QString::fromLocal8Bit("相机型号:"));
  121. std::string str_name =
  122. (char *)pstMVDevInfo->SpecialInfo.stUsb3VInfo.chModelName;
  123. cameraInfo_index.append(QString::fromStdString(str_name));
  124. cameraInfo_index.append("\n");
  125. } else {
  126. cameraInfo_index.append(QString::fromLocal8Bit("相机型号:未知"));
  127. }
  128. cameraInfo_index.append(QString::fromLocal8Bit("相机品牌:海康威视"));
  129. return cameraInfo_index;
  130. }
  131. void Widget::on_pbn_open_camera_clicked()
  132. {
  133. ui->pbn_open_camera->setEnabled(false);
  134. ui->pbn_close_camera->setEnabled(true);
  135. ui->rdo_continue_mode->setEnabled(true);
  136. ui->rdo_softigger_mode->setEnabled(true);
  137. ui->rdo_continue_mode->setCheckable(true);
  138. // 参数据控件
  139. ui->le_set_exposure->setEnabled(true);
  140. ui->le_set_gain->setEnabled(true);
  141. OpenDevices();
  142. }
  143. void Widget::OpenDevices()
  144. {
  145. int nRet = MV_OK;
  146. // 创建相机指针对象
  147. for (unsigned int i = 0, j = 0; j < m_stDevList.nDeviceNum; j++, i++) {
  148. m_pcMyCamera[i] = new CMvCamera;
  149. // 相机对象初始化
  150. m_pcMyCamera[i]->m_pBufForDriver = NULL;
  151. m_pcMyCamera[i]->m_pBufForSaveImage = NULL;
  152. m_pcMyCamera[i]->m_nBufSizeForDriver = 0;
  153. m_pcMyCamera[i]->m_nBufSizeForSaveImage = 0;
  154. m_pcMyCamera[i]->m_nTLayerType =
  155. m_stDevList.pDeviceInfo[j]->nTLayerType;
  156. nRet = m_pcMyCamera[i]->Open(m_stDevList.pDeviceInfo[j]); //打开相机
  157. //设置触发模式
  158. m_pcMyCamera[i]->setTriggerMode(TRIGGER_ON);
  159. //设置触发源为软触发
  160. m_pcMyCamera[i]->setTriggerSource(TRIGGER_SOURCE);
  161. //设置曝光时间,初始为40000,并关闭自动曝光模式
  162. m_pcMyCamera[i]->setExposureTime(EXPOSURE_TIME);
  163. m_pcMyCamera[i]->SetEnumValue("ExposureAuto",
  164. MV_EXPOSURE_AUTO_MODE_OFF);
  165. }
  166. }
  167. void Widget::on_rdo_continue_mode_clicked()
  168. {
  169. ui->pbn_start_grabbing->setEnabled(true);
  170. m_nTriggerMode = TRIGGER_ON;
  171. }
  172. void Widget::on_rdo_softigger_mode_clicked()
  173. {
  174. // 如果开始选择的连续模式,切换到触发模式之前,需要先停止采集
  175. if (m_bContinueStarted == 1) {
  176. on_pbn_stop_grabbing_clicked(); //先执行停止采集
  177. }
  178. ui->pbn_start_grabbing->setEnabled(false);
  179. ui->pbn_software_once->setEnabled(true);
  180. m_nTriggerMode = TRIGGER_OFF;
  181. for (unsigned int i = 0; i < m_stDevList.nDeviceNum; i++) {
  182. m_pcMyCamera[i]->setTriggerMode(m_nTriggerMode);
  183. }
  184. }
  185. void Widget::on_pbn_start_grabbing_clicked()
  186. {
  187. // 触发模式标记一下,切换触发模式时先执行停止采集图像函数
  188. m_bContinueStarted = 1;
  189. ui->pbn_start_grabbing->setEnabled(false);
  190. ui->pbn_stop_grabbing->setEnabled(true);
  191. // 保存图像控件
  192. ui->pbn_save_BMP->setEnabled(true);
  193. // 图像采集控件
  194. ui->pbn_save_JPG->setEnabled(true);
  195. int camera_Index = 0;
  196. // 先判断什么模式,再判断是否正在采集
  197. if (m_nTriggerMode == TRIGGER_ON) {
  198. // 开始采集之后才创建workthread线程
  199. for (unsigned int i = 0; i < m_stDevList.nDeviceNum; i++) {
  200. //开启相机采集
  201. m_pcMyCamera[i]->StartGrabbing();
  202. camera_Index = i;
  203. if (camera_Index == 0) {
  204. myThread_Camera_show->getCameraPtr(
  205. m_pcMyCamera[0]); //线程获取左相机指针
  206. myThread_Camera_show->getImagePtr(
  207. myImage_L); //线程获取图像指针
  208. myThread_Camera_show->getCameraIndex(0); //相机 Index==0
  209. if (!myThread_Camera_show->isRunning()) {
  210. myThread_Camera_show->start();
  211. m_pcMyCamera[0]->softTrigger();
  212. m_pcMyCamera[0]->ReadBuffer(*myImage_L); //读取Mat格式的图像
  213. }
  214. }
  215. }
  216. }
  217. }
  218. void Widget::on_pbn_stop_grabbing_clicked()
  219. {
  220. ui->pbn_start_grabbing->setEnabled(true);
  221. ui->pbn_stop_grabbing->setEnabled(false);
  222. for (unsigned int i = 0; i < m_stDevList.nDeviceNum; i++) {
  223. //关闭相机
  224. if (myThread_Camera_show->isRunning()) {
  225. m_pcMyCamera[0]->StopGrabbing();
  226. myThread_Camera_show->requestInterruption();
  227. myThread_Camera_show->wait();
  228. }
  229. }
  230. }
  231. void Widget::on_pbn_software_once_clicked()
  232. {
  233. // 保存图像控件
  234. ui->pbn_save_BMP->setEnabled(true);
  235. ui->pbn_save_JPG->setEnabled(true);
  236. if (m_nTriggerMode == TRIGGER_OFF) {
  237. int nRet = MV_OK;
  238. for (unsigned int i = 0; i < m_stDevList.nDeviceNum; i++) {
  239. //开启相机采集
  240. m_pcMyCamera[i]->StartGrabbing();
  241. if (i == 0) {
  242. nRet = m_pcMyCamera[i]->CommandExecute("TriggerSoftware");
  243. m_pcMyCamera[i]->ReadBuffer(*myImage_L);
  244. display_myImage_L(myImage_L); //左相机图像
  245. display_myImage_Main(myImage_L); //主界面显示
  246. }
  247. }
  248. }
  249. }
  250. //在相机配置界面显示
  251. void Widget::display_myImage_L(const Mat *imagePrt)
  252. {
  253. cv::Mat rgb;
  254. //判断是黑白、彩色图像
  255. QImage QmyImage_L;
  256. if (myImage_L->channels() > 1) {
  257. cv::cvtColor(*imagePrt, rgb, cv::COLOR_BGR2RGB);
  258. QmyImage_L = QImage((const unsigned char *)(rgb.data), rgb.cols,
  259. rgb.rows, QImage::Format_RGB888);
  260. } else {
  261. cv::cvtColor(*imagePrt, rgb, cv::COLOR_GRAY2RGB);
  262. QmyImage_L = QImage((const unsigned char *)(rgb.data), rgb.cols,
  263. rgb.rows, QImage::Format_RGB888);
  264. }
  265. QmyImage_L = (QmyImage_L)
  266. .scaled(ui->lbl_camera_L->size(), Qt::IgnoreAspectRatio,
  267. Qt::SmoothTransformation); //饱满填充
  268. ui->lbl_camera_L->setPixmap(QPixmap::fromImage(QmyImage_L));
  269. }
  270. //在主界面显示
  271. void Widget::display_myImage_Main(const Mat *imagePrt)
  272. {
  273. cv::Mat rgb;
  274. cv::cvtColor(*imagePrt, rgb, cv::COLOR_BGR2RGB);
  275. QImage QmyImage_L;
  276. QmyImage_L = QImage((const unsigned char *)(rgb.data), rgb.cols,
  277. rgb.rows, QImage::Format_RGB888);
  278. QmyImage_L = (QmyImage_L)
  279. .scaled(ui->lbl_camera_L->size(), Qt::IgnoreAspectRatio,
  280. Qt::SmoothTransformation); //饱满填充
  281. QDateTime curdatetime;
  282. if(!QmyImage_L.isNull()){
  283. curdatetime=QDateTime::currentDateTime();
  284. QString str=curdatetime.toString("yyyyMMddhhmmss");
  285. QString filename="/home/joe/picture/"+str+".png";
  286. qDebug()<<"正在保存"<<filename<<"图片,请稍候...";
  287. /* save(arg1,arg2,arg3)重载函数,arg1 代表路径文件名,
  288. * arg2 保存的类型,arg3 代表保存的质量等级 */
  289. QmyImage_L.save(filename,"PNG",-1);
  290. qDebug()<<"保存完成!";
  291. }
  292. //显示图像
  293. this->mainWid=new MainWidget();
  294. //ui->lbl_camera_L->setPixmap(QPixmap::fromImage(QmyImage_L));
  295. mainWid->ui->lbl_res_pic->setPixmap(QPixmap::fromImage(QmyImage_L));
  296. }
  297. void Widget::CloseDevices()
  298. {
  299. for (unsigned int i = 0; i < m_stDevList.nDeviceNum; i++) {
  300. // 关闭线程、相机
  301. if (myThread_Camera_show->isRunning()) {
  302. myThread_Camera_show->requestInterruption();
  303. myThread_Camera_show->wait();
  304. m_pcMyCamera[0]->StopGrabbing();
  305. }
  306. m_pcMyCamera[i]->Close();
  307. }
  308. // 关闭之后再枚举一遍
  309. memset(&m_stDevList, 0,
  310. sizeof(MV_CC_DEVICE_INFO_LIST)); // 初始化设备信息列表
  311. int devices_num = MV_OK;
  312. devices_num =
  313. CMvCamera::EnumDevices(MV_GIGE_DEVICE | MV_USB_DEVICE,
  314. &m_stDevList); // 枚举子网内所有设备,相机设备数量
  315. }
  316. void Widget::on_pbn_close_camera_clicked()
  317. {
  318. ui->pbn_open_camera->setEnabled(true);
  319. ui->pbn_close_camera->setEnabled(false);
  320. // 图像采集控件
  321. ui->rdo_continue_mode->setEnabled(false);
  322. ui->rdo_softigger_mode->setEnabled(false);
  323. ui->pbn_start_grabbing->setEnabled(false);
  324. ui->pbn_stop_grabbing->setEnabled(false);
  325. ui->pbn_software_once->setEnabled(false);
  326. // 参数控件
  327. ui->le_set_exposure->setEnabled(false);
  328. ui->le_set_gain->setEnabled(false);
  329. // 保存图像控件
  330. ui->pbn_save_BMP->setEnabled(false);
  331. ui->pbn_save_JPG->setEnabled(false);
  332. // 关闭设备,销毁线程
  333. CloseDevices();
  334. ui->lbl_camera_messagee->clear();
  335. ui->lbl_camera_L->clear();
  336. ui->lbl_camera_L->setPixmap(QPixmap(":/icon/MVS.png"));
  337. //ui->lbl_camera_R->clear();
  338. }
  339. void Widget::on_pbn_save_BMP_clicked()
  340. {
  341. m_nSaveImageType = MV_Image_Bmp;
  342. SaveImage();
  343. }
  344. void Widget::on_pbn_save_JPG_clicked()
  345. {
  346. m_nSaveImageType = MV_Image_Jpeg;
  347. SaveImage();
  348. }
  349. void Widget::SaveImage()
  350. {
  351. // 获取1张图
  352. MV_FRAME_OUT_INFO_EX stImageInfo = { 0 };
  353. memset(&stImageInfo, 0, sizeof(MV_FRAME_OUT_INFO_EX));
  354. unsigned int nDataLen = 0;
  355. int nRet = MV_OK;
  356. //保存图片的路径
  357. QString saveDirPath="/home/joe/program/VersionDetect/LiteTest/image/";
  358. for (int i = 0; i < devices_num; i++) {
  359. //保存图像的缓存类指针
  360. const char *imageName_c_str=NULL;
  361. // 仅在第一次保存图像时申请缓存,在CloseDevice时释放
  362. if (NULL == m_pcMyCamera[i]->m_pBufForDriver) {
  363. unsigned int nRecvBufSize = 0;
  364. m_pcMyCamera[i]->GetIntValue("PayloadSize", &nRecvBufSize);
  365. m_pcMyCamera[i]->m_nBufSizeForDriver = nRecvBufSize; // 一帧数据大小
  366. m_pcMyCamera[i]->m_pBufForDriver =
  367. (unsigned char *)malloc(m_pcMyCamera[i]->m_nBufSizeForDriver);
  368. }
  369. nRet = m_pcMyCamera[i]->GetOneFrameTimeout(
  370. m_pcMyCamera[i]->m_pBufForDriver, &nDataLen,
  371. m_pcMyCamera[i]->m_nBufSizeForDriver, &stImageInfo, 1000);
  372. if (MV_OK == nRet) {
  373. // 仅在第一次保存图像时申请缓存,在 CloseDevice 时释放
  374. if (NULL == m_pcMyCamera[i]->m_pBufForSaveImage) {
  375. // BMP图片大小:width * height * 3 + 2048(预留BMP头大小)
  376. m_pcMyCamera[i]->m_nBufSizeForSaveImage =
  377. stImageInfo.nWidth * stImageInfo.nHeight * 3 + 2048;
  378. m_pcMyCamera[i]->m_pBufForSaveImage = (unsigned char *)malloc(
  379. m_pcMyCamera[i]->m_nBufSizeForSaveImage);
  380. }
  381. // 设置对应的相机参数
  382. MV_SAVE_IMAGE_PARAM_EX stParam = { 0 };
  383. stParam.enImageType = m_nSaveImageType; // 需要保存的图像类型
  384. stParam.enPixelType = stImageInfo.enPixelType; // 相机对应的像素格式
  385. stParam.nBufferSize =
  386. m_pcMyCamera[i]->m_nBufSizeForSaveImage; // 存储节点的大小
  387. stParam.nWidth = stImageInfo.nWidth; // 相机对应的宽
  388. stParam.nHeight = stImageInfo.nHeight; // 相机对应的高
  389. stParam.nDataLen = stImageInfo.nFrameLen;
  390. stParam.pData = m_pcMyCamera[i]->m_pBufForDriver;
  391. stParam.pImageBuffer = m_pcMyCamera[i]->m_pBufForSaveImage;
  392. stParam.nJpgQuality = 90; // jpg编码,仅在保存Jpg图像时有效
  393. nRet = m_pcMyCamera[i]->SaveImage(&stParam);
  394. QString image_name;
  395. //图像名称
  396. char chImageName[IMAGE_NAME_LEN] = {0};
  397. if (MV_Image_Bmp == stParam.enImageType) {
  398. if (i == 0) {
  399. snprintf(chImageName, IMAGE_NAME_LEN,
  400. "Image_w%d_h%d_fn%d_L.bmp", stImageInfo.nWidth,
  401. stImageInfo.nHeight, stImageInfo.nFrameNum);
  402. image_name = "Image_w";
  403. image_name.append(QString::number(stImageInfo.nWidth));
  404. image_name.append("_h");
  405. image_name.append(QString::number(stImageInfo.nHeight));
  406. image_name.append("_fn");
  407. image_name.append(QString::number(stImageInfo.nFrameNum));
  408. image_name.append("_L.bmp");
  409. }
  410. if (i == 1) {
  411. snprintf(chImageName, IMAGE_NAME_LEN,
  412. "Image_w%d_h%d_fn%03d_R.bmp", stImageInfo.nWidth,
  413. stImageInfo.nHeight, stImageInfo.nFrameNum);
  414. }
  415. } else if (MV_Image_Jpeg == stParam.enImageType) {
  416. if (i == 0) {
  417. snprintf(chImageName, IMAGE_NAME_LEN,
  418. "Image_w%d_h%d_fn%d_L.jpg", stImageInfo.nWidth,
  419. stImageInfo.nHeight, stImageInfo.nFrameNum);
  420. image_name = "Image_w";
  421. image_name.append(QString::number(stImageInfo.nWidth));
  422. image_name.append("_h");
  423. image_name.append(QString::number(stImageInfo.nHeight));
  424. image_name.append("_fn");
  425. image_name.append(QString::number(stImageInfo.nFrameNum));
  426. image_name.append("_L.jpg");
  427. }
  428. if (i == 1) {
  429. snprintf(chImageName, IMAGE_NAME_LEN,
  430. "Image_w%d_h%d_fn%03d_R.jpg", stImageInfo.nWidth,
  431. stImageInfo.nHeight, stImageInfo.nFrameNum);
  432. }
  433. }
  434. QString imagePath = saveDirPath + image_name;
  435. QByteArray ba = imagePath.toLatin1();
  436. imageName_c_str = ba.data();
  437. FILE *fp = fopen(imageName_c_str, "wb");
  438. fwrite(m_pcMyCamera[i]->m_pBufForSaveImage, 1, stParam.nImageLen,
  439. fp);
  440. fclose(fp);
  441. // ui->lbl_camera_R->setPixmap(QPixmap(image_name));
  442. // ui->lbl_camera_R->setScaledContents(true);
  443. }
  444. }
  445. }
  446. void Widget::on_le_set_exposure_textChanged(const QString &arg1)
  447. {
  448. //设置曝光时间
  449. QString str = ui->le_set_exposure->text(); // 读取
  450. int exposure_Time = str.toInt();
  451. for (int i = 0; i < devices_num; i++) {
  452. m_pcMyCamera[i]->SetEnumValue("ExposureAuto",
  453. MV_EXPOSURE_AUTO_MODE_OFF);
  454. m_pcMyCamera[i]->SetFloatValue("ExposureTime", exposure_Time);
  455. }
  456. }
  457. void Widget::on_le_set_gain_textChanged(const QString &arg1)
  458. {
  459. QString str = ui->le_set_gain->text(); // 读取
  460. float gain = str.toFloat();
  461. for (int i = 0; i < devices_num; i++) {
  462. m_pcMyCamera[i]->SetEnumValue("GainAuto", 0);
  463. m_pcMyCamera[i]->SetFloatValue("Gain", gain);
  464. }
  465. }
  466. void Widget::on_pbn_return_main_clicked()
  467. {
  468. //发送信号实现页面切换
  469. //connect(ui->pbn_return_main,SIGNAL(clicked()),this,SIGNAL(back()));
  470. }
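
The widget code above only enumerates, grabs, displays and saves frames; the inference call itself lives in the YoloV5 class shown in the next listing. As a rough bridge between the two, here is a hypothetical sketch of how a grabbed frame could be pushed through readModel()/Detect()/drawPred(). The slot name runDetectOnFrame, the model path, the static objects and the assumption that YoloV5 is default-constructible are all illustrative, not part of the original project:

// assumes "yolov5.h" and <opencv2/opencv.hpp> are included at the top of widget.cpp
void Widget::runDetectOnFrame()
{
    // load the network once; the CPU backend is what runs on the Raspberry Pi
    static cv::dnn::Net net;
    static YoloV5 yolo;
    static bool loaded = false;
    if (!loaded) {
        std::string modelPath = "/home/joe/model/v5lite-s.onnx"; // assumed path
        loaded = yolo.readModel(net, modelPath, false);
        if (!loaded) return;
    }
    cv::Mat frame = myImage_L->clone();            // frame filled by ReadBuffer()
    std::vector<Output> results;
    if (yolo.Detect(frame, net, results)) {
        // drawPred() in the next listing draws a fixed blue box, so this colour table is only a placeholder
        std::vector<cv::Scalar> colors(100, cv::Scalar(255, 0, 0));
        yolo.drawPred(frame, results, colors);
    }
    display_myImage_Main(&frame);                  // reuse the existing display helper
}

Wired into the continuous-grab thread or the soft-trigger slot, these three calls are all the inference path needs.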

5)yolov5.cpp

  1. #include"yolov5.h"
  2. using namespace std;
  3. using namespace cv;
  4. using namespace cv::dnn;
  5. bool YoloV5::readModel(Net &net, string &netPath,bool isCuda = false) {
  6. try {
  7. net = readNetFromONNX(netPath);
  8. }
  9. catch (const std::exception&) {
  10. return false;
  11. }
  12. //cuda
  13. if (isCuda) {
  14. net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
  15. net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
  16. }
  17. //cpu
  18. else {
  19. net.setPreferableBackend(cv::dnn::DNN_BACKEND_DEFAULT);
  20. net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
  21. }
  22. return true;
  23. }
  24. bool YoloV5::Detect(Mat &SrcImg,Net &net,vector<Output> &output) {
  25. Mat blob;
  26. int col = SrcImg.cols;
  27. int row = SrcImg.rows;
  28. int maxLen = MAX(col, row);
  29. Mat netInputImg = SrcImg.clone();
  30. if (maxLen > 1.2*col || maxLen > 1.2*row) {
  31. Mat resizeImg = Mat::zeros(maxLen, maxLen, CV_8UC3);
  32. SrcImg.copyTo(resizeImg(Rect(0, 0, col, row)));
  33. netInputImg = resizeImg;
  34. }
  35. blobFromImage(netInputImg, blob, 1 / 255.0, cv::Size(netWidth, netHeight), cv::Scalar(104, 117,123), true, false);
  36. //blobFromImage(netInputImg, blob, 1 / 255.0, cv::Size(netWidth, netHeight), cv::Scalar(0, 0,0), true, false);//如果训练集未对图片进行减去均值操作,则需要设置为这句
  37. //blobFromImage(netInputImg, blob, 1 / 255.0, cv::Size(netWidth, netHeight), cv::Scalar(114, 114,114), true, false);
  38. net.setInput(blob);
  39. std::vector<cv::Mat> netOutputImg;
  40. // vector<string> outputLayerName{"652","669", "686","output" };
  41. // net.forward(netOutputImg, outputLayerName[3]); //获取output的输出
  42. net.forward(netOutputImg, net.getUnconnectedOutLayersNames());
  43. //接上面
  44. std::vector<int> classIds;//结果id数组
  45. std::vector<float> confidences;//结果每个id对应置信度数组
  46. std::vector<cv::Rect> boxes;//每个id矩形框
  47. float ratio_h = (float)netInputImg.rows / netHeight;
  48. float ratio_w = (float)netInputImg.cols / netWidth;
  49. int net_width = className.size() + 5; //输出的网络宽度是类别数+5
  50. float* pdata = (float*)netOutputImg[0].data;
  51. for (int stride = 0; stride < 3; stride++) { //stride
  52. int grid_x = (int)(netWidth / netStride[stride]);
  53. int grid_y = (int)(netHeight / netStride[stride]);
  54. for (int anchor = 0; anchor < 3; anchor++) { //anchors
  55. const float anchor_w = netAnchors[stride][anchor * 2];
  56. const float anchor_h = netAnchors[stride][anchor * 2 + 1];
  57. for (int i = 0; i < grid_y; i++) {
  58. for (int j = 0; j < grid_y; j++) {
  59. float box_score = Sigmoid(pdata[4]);//获取每一行的box框中含有某个物体的概率
  60. if (box_score > boxThreshold) {
  61. //为了使用minMaxLoc(),将85长度数组变成Mat对象
  62. cv::Mat scores(1,className.size(), CV_32FC1, pdata+5);
  63. Point classIdPoint;
  64. double max_class_socre;
  65. minMaxLoc(scores, 0, &max_class_socre, 0, &classIdPoint);
  66. max_class_socre = Sigmoid((float)max_class_socre);
  67. if (max_class_socre > classThreshold) {
  68. //rect [x,y,w,h]
  69. float x = (Sigmoid(pdata[0]) * 2.f - 0.5f + j) * netStride[stride]; //x
  70. float y = (Sigmoid(pdata[1]) * 2.f - 0.5f + i) * netStride[stride]; //y
  71. float w = powf(Sigmoid(pdata[2]) * 2.f, 2.f) * anchor_w; //w
  72. float h = powf(Sigmoid(pdata[3]) * 2.f, 2.f) * anchor_h; //h
  73. int left = (x - 0.5*w)*ratio_w;
  74. int top = (y - 0.5*h)*ratio_h;
  75. classIds.push_back(classIdPoint.x);
  76. confidences.push_back(max_class_socre*box_score);
  77. boxes.push_back(Rect(left, top, int(w*ratio_w), int(h*ratio_h)));
  78. }
  79. }
  80. pdata += net_width;//指针移到下一行
  81. }
  82. }
  83. }
  84. }
  85. vector<int> nms_result;
  86. NMSBoxes(boxes, confidences, classThreshold, nmsThreshold, nms_result);
  87. for (int i = 0; i < nms_result.size(); i++) {
  88. int idx = nms_result[i];
  89. Output result;
  90. result.id = classIds[idx];
  91. result.confidence = confidences[idx];
  92. result.box = boxes[idx];
  93. output.push_back(result);
  94. }
  95. if (output.size())
  96. return true;
  97. else
  98. return false;
  99. }
  100. //这里的color是颜色数组,对每一个id随机分配一种颜色
  101. void YoloV5::drawPred(Mat &img, vector<Output> result, vector<Scalar> color) {
  102. for (int i = 0; i < result.size(); i++) {
  103. int left, top;
  104. left = result[i].box.x;
  105. top = result[i].box.y;
  106. int color_num = i;
  107. //rectangle(img, result[i].box, color[result[i].id], 2, 8);
  108. rectangle(img, result[i].box, Scalar(255, 0, 0), 2, 8);
  109. string label = className[result[i].id] +":" + to_string(result[i].confidence);
  110. int baseLine;
  111. Size labelSize = getTextSize(label, FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
  112. top = max(top, labelSize.height);
  113. //rectangle(frame, Point(left, top - int(1.5 * labelSize.height)), Point(left + int(1.5 * labelSize.width), top + baseLine), Scalar(0, 255, 0), FILLED);
  114. //putText(img, label, Point(left, top), FONT_HERSHEY_SIMPLEX, 1, color[result[i].id], 2);
  115. putText(img, label, Point(30, 100), FONT_HERSHEY_SIMPLEX, 2.5, Scalar(255, 0, 0), 2);
  116. }
  117. }
  118. void getTime(QString msg)
  119. {
  120. // prints the time elapsed since the previous call, in ms (simple per-frame timer)
  121. static auto last = std::chrono::steady_clock::now();
  122. auto now = std::chrono::steady_clock::now();
  123. std::chrono::duration<double, std::milli> elapsed = now - last;
  124. last = now;
  125. cout << msg.arg(elapsed.count()).toStdString() << endl;
  126. }

One point is worth spelling out: which forward() call you use in Detect() depends on how the ONNX model was exported (the corresponding Netron views were shown in the .pt-to-onnx section above). If, as described earlier, the three detection scales were merged into a single output node (in the author's model that node is simply named "output"), you fetch that one blob by name, i.e. the commented-out outputLayerName / net.forward(netOutputImg, outputLayerName[3]) lines above do the job, and the decoding loop can walk the merged array exactly as written. If instead the export keeps the three scale outputs (the 80x80, 40x40 and 20x20 grids) as separate nodes, use the active net.forward(netOutputImg, net.getUnconnectedOutLayersNames()) call; apart from indexing the three returned Mats inside the decoding loop, essentially nothing else needs to change. Deployment will of course still throw up assorted errors, but once you have read through the code they are usually straightforward to fix. A short sketch of both cases follows.
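
A minimal sketch, assuming the cv::dnn::Net has already been loaded with readModel(); the node name "output" is taken from the commented line in Detect() and is only a placeholder for whatever name your own export produces:

// Hypothetical helper: pick the forward() variant that matches the export layout.
static std::vector<cv::Mat> forwardByLayout(cv::dnn::Net &net, bool mergedOutput)
{
    std::vector<cv::Mat> outs;
    if (mergedOutput) {
        // single merged node: one 1 x N x (numClasses + 5) blob, which the decoding
        // loop in Detect() can walk front to back exactly as written
        net.forward(outs, std::vector<cv::String>{ "output" });
    } else {
        // three separate scale outputs (the 80/40/20 grids); the decoding loop would
        // then reset pdata to (float*)outs[stride].data at the start of each stride
        net.forward(outs, net.getUnconnectedOutLayersNames());
    }
    return outs;
}

Note that with the merged export, getUnconnectedOutLayersNames() also returns just that single node, so the active forward() line in Detect() covers the author's single-output model as well.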

Closing note: if you would like the complete Qt project source, it is available through the link below. Putting this together took a fair amount of effort, so thank you for your support.

qt源码文件
