[1] CUDA Introduction and Getting Started
[2] CUDA Program Structure + Executing Threads on the Device
[3] Querying GPU Properties in a CUDA Program
[4] Vector Computation and Parallel Communication Patterns in CUDA
[5] CUDA Thread Invocation and Memory Architecture
[6] CUDA Thread Synchronization
[7] CUDA Constant Memory and Texture Memory
[8] CUDA Vector Dot Product and Matrix Multiplication
[9] CUDA Performance Measurement and Error Handling
[10] Improving CUDA Program Performance, and Streams
[11] Accelerating Sorting Algorithms with CUDA
[12] Image Processing with CUDA
[13] CUDA_Opencv Joint Compilation Process
[14] CUDA: Processing Images with Opencv
[15] Implementing Basic Computer Vision Programs with the Opencv_CUDA Module
[16] Accessing Image Pixels, Histogram Equalization, and Geometric Transforms with Opencv_CUDA
[17] Filtering Operations with Opencv_CUDA
[18] Opencv_CUDA Applications: Color-Based Object Detection and Tracking
[19] Opencv_CUDA Applications: Shape-Based Object Detection and Tracking
[20] Opencv_CUDA Applications: Keypoint Detectors and Descriptors
[21] Opencv_CUDA Applications: Object Detection Using Haar Cascades
[22] Opencv_CUDA Applications: Object Tracking with Background Subtraction
1. OpenCV DNN Module for Deep Learning Applications: Classification, Detection, Segmentation, Tracking
# The extra test data and sample models can be pulled via git:
git clone https://github.com/opencv/opencv_extra.git
Open-source OpenCV + contrib - 455: https://gitcode.net/openmodel/opencv
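To use the CUDA backend of the DNN module, OpenCV must be built together with the contrib modules and with CUDA enabled (this is the "CUDA_Opencv joint compilation" from article [13] in the series above). A minimal configuration sketch, assuming `opencv` and `opencv_contrib` are checked out side by side; paths and job count are placeholders to adapt:

```shell
# Assumed layout: ./opencv and ./opencv_contrib in the current directory.
# WITH_CUDA builds the CUDA-accelerated modules;
# OPENCV_DNN_CUDA enables the DNN module's CUDA backend.
cmake -S opencv -B build \
      -DOPENCV_EXTRA_MODULES_PATH="$(pwd)/opencv_contrib/modules" \
      -DWITH_CUDA=ON \
      -DOPENCV_DNN_CUDA=ON
cmake --build build -j"$(nproc)"
```

After the build, `cv::dnn::Net::setPreferableBackend(DNN_BACKEND_CUDA)` becomes usable; without `OPENCV_DNN_CUDA=ON` the DNN module silently falls back to the CPU backend.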
#ifndef OPENCV_DNN_HPP
#define OPENCV_DNN_HPP

// This is an umbrella header to include into your project.
// We are free to change headers layout in dnn subfolder, so please include
// this header for future compatibility

/** @defgroup dnn Deep Neural Network module
  @{
    This module contains:
      - API for new layers creation, layers are building bricks of neural networks;
      - set of built-in most-useful Layers;
      - API to construct and modify comprehensive neural networks from layers;
      - functionality for loading serialized networks models from different frameworks.

    Functionality of this module is designed only for forward pass computations
    (i.e. network testing). A network training is in principle not supported.
  @}
*/
/** @example samples/dnn/classification.cpp
Check @ref tutorial_dnn_googlenet "the corresponding tutorial" for more details
*/
/** @example samples/dnn/colorization.cpp
*/
/** @example samples/dnn/object_detection.cpp
Check @ref tutorial_dnn_yolo "the corresponding tutorial" for more details
*/
/** @example samples/dnn/openpose.cpp
*/
/** @example samples/dnn/segmentation.cpp
*/
/** @example samples/dnn/text_detection.cpp
*/

#include <opencv2/dnn/dnn.hpp>

#endif /* OPENCV_DNN_HPP */
In .\opencv-contrib\include\opencv2\dnn\dnn.hpp you can find, among other things, the computation backend and target device enums:

/**
 * @brief Enum of computation backends supported by layers.
 * @see Net::setPreferableBackend
 */
enum Backend
{
    //! DNN_BACKEND_DEFAULT equals to DNN_BACKEND_INFERENCE_ENGINE if
    //! OpenCV is built with Intel's Inference Engine library or
    //! DNN_BACKEND_OPENCV otherwise.
    DNN_BACKEND_DEFAULT = 0,
    DNN_BACKEND_HALIDE,
    DNN_BACKEND_INFERENCE_ENGINE,  //!< Intel's Inference Engine computational backend
                                   //!< @sa setInferenceEngineBackendType
    DNN_BACKEND_OPENCV,
    DNN_BACKEND_VKCOM,
    DNN_BACKEND_CUDA,
    DNN_BACKEND_WEBNN,
#ifdef __OPENCV_BUILD
    DNN_BACKEND_INFERENCE_ENGINE_NGRAPH = 1000000,     // internal - use DNN_BACKEND_INFERENCE_ENGINE + setInferenceEngineBackendType()
    DNN_BACKEND_INFERENCE_ENGINE_NN_BUILDER_2019,      // internal - use DNN_BACKEND_INFERENCE_ENGINE + setInferenceEngineBackendType()
#endif
};

/**
 * @brief Enum of target devices for computations.
 * @see Net::setPreferableTarget
 */
enum Target
{
    DNN_TARGET_CPU = 0,
    DNN_TARGET_OPENCL,
    DNN_TARGET_OPENCL_FP16,
    DNN_TARGET_MYRIAD,
    DNN_TARGET_VULKAN,
    DNN_TARGET_FPGA,  //!< FPGA device with CPU fallbacks using Inference Engine's Heterogeneous plugin.
    DNN_TARGET_CUDA,
    DNN_TARGET_CUDA_FP16,
    DNN_TARGET_HDDL
};