
Intel RealSense Depth Image Processing, Part 1 (C++)


Realsense SDK 2.0 + C++

rs-hello-realsense

The rs-hello-realsense example demonstrates the basics of connecting to an Intel RealSense device and using depth data, by printing the distance to the object at the center of the camera's field of view.

Assuming a camera is connected, you should see the output line continuously updating; X is the distance, in meters, to the object at the center of the camera's field of view.

Except for advanced functionality, everything is available through a single header:

#include <librealsense2/rs.hpp> // Include Intel RealSense Cross Platform API

Next, we create and start a RealSense pipeline. The pipeline is the main high-level primitive controlling camera enumeration and streaming.

// Create a Pipeline - this serves as a top-level API for streaming and processing frames
rs2::pipeline p;
// Configure and start the pipeline
p.start();

Once the pipeline is configured, we can loop, waiting for new frames.

Intel RealSense cameras typically provide multiple video, motion, or pose streams. The wait_for_frames function blocks until the next coherent set of frames arrives from the various configured streams.

// Block program until frames arrive
rs2::frameset frames = p.wait_for_frames();

To get the first frame from the depth stream, you can use the get_depth_frame helper function:

// Try to get a frame of a depth image
rs2::depth_frame depth = frames.get_depth_frame();

Next, we query the default depth frame dimensions (these may vary from sensor to sensor):

// Get the depth frame's dimensions
float width = depth.get_width();
float height = depth.get_height();

To get the distance at a specific pixel (the center of the frame), you can use the get_distance function:

// Query the distance from the camera to the object in the center of the image
float dist_to_center = depth.get_distance(width / 2, height / 2);

The only thing left to do is print the resulting distance to the screen:

// Print the distance
std::cout << "The camera is facing an object " << dist_to_center << " meters away \r";
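
Putting the snippets above together, the complete example looks roughly like this (a sketch following the structure of the official rs-hello-realsense sample, wrapped in the same try/catch error handling used by the other samples later in this post):

#include <librealsense2/rs.hpp> // Include Intel RealSense Cross Platform API
#include <cstdlib>
#include <iostream>

int main(int argc, char * argv[]) try
{
    // Create and start the pipeline with the default configuration
    rs2::pipeline p;
    p.start();

    while (true)
    {
        // Block until a coherent set of frames arrives, then take the depth frame
        rs2::frameset frames = p.wait_for_frames();
        rs2::depth_frame depth = frames.get_depth_frame();

        // Query the frame dimensions and read the distance at the center pixel
        float width = depth.get_width();
        float height = depth.get_height();
        float dist_to_center = depth.get_distance(width / 2, height / 2);

        // Print the distance
        std::cout << "The camera is facing an object " << dist_to_center << " meters away \r";
    }

    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception & e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}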

rs-imshow

This example opens an OpenCV UI window and renders a colorized depth stream to it:

// Query frame size (width and height)
const int w = depth.as<rs2::video_frame>().get_width();
const int h = depth.as<rs2::video_frame>().get_height();
// Create OpenCV matrix of size (w,h) from the colorized depth data
Mat image(Size(w, h), CV_8UC3, (void*)depth.get_data(), Mat::AUTO_STEP);

rs-capture

Stream and render depth and RGB data to the screen.

Except for advanced functionality, everything is available through a single header:

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API

Next, we include a very short helper library to encapsulate OpenGL rendering and window management:

#include "example.hpp"          // Include a short list of convenience functions for rendering

This header lets us easily open a new window and prepare textures for rendering. The texture class is designed to hold video frame data for rendering.

// Create a simple OpenGL window for rendering:
window app(1280, 720, "RealSense Capture Example");
// Declare two textures on the GPU, one for depth and one for color
texture depth_image, color_image;

Depth data is usually provided as 12-bit grayscale, which is not very useful for visualization. To enhance the visualization, we provide an API that converts the grayscale image to RGB:

// Declare depth colorizer for enhanced color visualization of depth data
rs2::colorizer color_map;

The entry point to the SDK API is the pipeline class:

// Declare the RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;
// Start streaming with the default recommended configuration
pipe.start();

Next, we wait for the next set of frames, effectively blocking the program:

rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera

Using the frameset object, we find the first depth frame and the first color frame in the set:

rs2::frame depth = color_map(data.get_depth_frame()); // Find and colorize the depth data
rs2::frame color = data.get_color_frame();            // Find the color data

Finally, depth and color rendering are implemented by the texture class from example.hpp.

// Render depth on to the first half of the screen and color on to the second
depth_image.render(depth, { 0, 0, app.width() / 2, app.height() });
color_image.render(color, { app.width() / 2, 0, app.width() / 2, app.height() });
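
Combining the snippets, the capture loop looks roughly like this (a sketch; it assumes, as in the official sample, that the window helper from example.hpp converts to bool for as long as the window stays open):

while (app) // Application still alive?
{
    rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera

    rs2::frame depth = color_map(data.get_depth_frame()); // Find and colorize the depth data
    rs2::frame color = data.get_color_frame();            // Find the color data

    // Render depth on to the first half of the screen and color on to the second
    depth_image.render(depth, { 0, 0, app.width() / 2, app.height() });
    color_image.render(color, { app.width() / 2, 0, app.width() / 2, app.height() });
}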

rs-depth

Source code on GitHub (the code is quite long, so I can only chew through it bit by bit):

#include <librealsense2/rs.h>
#include <librealsense2/h/rs_pipeline.h>
#include <librealsense2/h/rs_option.h>
#include <librealsense2/h/rs_frame.h>
#include "example.h"

#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>

//                                These parameters are reconfigurable                                         //
#define STREAM        RS2_STREAM_DEPTH // rs2_stream is a type of data provided by RealSense device           //
#define FORMAT        RS2_FORMAT_Z16   // rs2_format identifies how binary data is encoded within a frame     //
#define WIDTH         640              // Defines the number of columns for each frame or zero for auto resolve//
#define HEIGHT        0                // Defines the number of lines for each frame or zero for auto resolve //
#define FPS           30               // Defines the rate of frames per second                               //
#define STREAM_INDEX  0                // Defines the stream index, used for multiple streams of the same type//
#define HEIGHT_RATIO  20               // Defines the height ratio between the original frame to the new frame//
#define WIDTH_RATIO   10               // Defines the width ratio between the original frame to the new frame //

// The number of meters represented by a single depth unit
float get_depth_unit_value(const rs2_device* const dev)
{
    rs2_error* e = 0;
    rs2_sensor_list* sensor_list = rs2_query_sensors(dev, &e);
    check_error(e);

    int num_of_sensors = rs2_get_sensors_count(sensor_list, &e);
    check_error(e);

    float depth_scale = 0;
    int is_depth_sensor_found = 0;
    int i;
    for (i = 0; i < num_of_sensors; ++i)
    {
        rs2_sensor* sensor = rs2_create_sensor(sensor_list, i, &e);
        check_error(e);

        // Check if the given sensor can be extended to depth sensor interface
        is_depth_sensor_found = rs2_is_sensor_extendable_to(sensor, RS2_EXTENSION_DEPTH_SENSOR, &e);
        check_error(e);

        if (1 == is_depth_sensor_found)
        {
            depth_scale = rs2_get_option((const rs2_options*)sensor, RS2_OPTION_DEPTH_UNITS, &e);
            check_error(e);
            rs2_delete_sensor(sensor);
            break;
        }
        rs2_delete_sensor(sensor);
    }
    rs2_delete_sensor_list(sensor_list);

    if (0 == is_depth_sensor_found)
    {
        printf("Depth sensor not found!\n");
        exit(EXIT_FAILURE);
    }

    return depth_scale;
}

int main()
{
    rs2_error* e = 0;

    // Create a context object. This object owns the handles to all connected realsense devices.
    // The returned object should be released with rs2_delete_context(...)
    rs2_context* ctx = rs2_create_context(RS2_API_VERSION, &e);
    check_error(e);

    /* Get a list of all the connected devices. */
    // The returned object should be released with rs2_delete_device_list(...)
    rs2_device_list* device_list = rs2_query_devices(ctx, &e);
    check_error(e);

    int dev_count = rs2_get_device_count(device_list, &e);
    check_error(e);
    printf("There are %d connected RealSense devices.\n", dev_count);
    if (0 == dev_count)
        return EXIT_FAILURE;

    // Get the first connected device
    // The returned object should be released with rs2_delete_device(...)
    rs2_device* dev = rs2_create_device(device_list, 0, &e);
    check_error(e);

    print_device_info(dev);

    /* Determine depth value corresponding to one meter */
    uint16_t one_meter = (uint16_t)(1.0f / get_depth_unit_value(dev));

    // Create a pipeline to configure, start and stop camera streaming
    // The returned object should be released with rs2_delete_pipeline(...)
    rs2_pipeline* pipeline = rs2_create_pipeline(ctx, &e);
    check_error(e);

    // Create a config instance, used to specify hardware configuration
    // The returned object should be released with rs2_delete_config(...)
    rs2_config* config = rs2_create_config(&e);
    check_error(e);

    // Request a specific configuration
    rs2_config_enable_stream(config, STREAM, STREAM_INDEX, WIDTH, HEIGHT, FORMAT, FPS, &e);
    check_error(e);

    // Start the pipeline streaming
    // The returned object should be released with rs2_delete_pipeline_profile(...)
    rs2_pipeline_profile* pipeline_profile = rs2_pipeline_start_with_config(pipeline, config, &e);
    if (e)
    {
        printf("The connected device doesn't support depth streaming!\n");
        exit(EXIT_FAILURE);
    }

    rs2_stream_profile_list* stream_profile_list = rs2_pipeline_profile_get_streams(pipeline_profile, &e);
    if (e)
    {
        printf("Failed to create stream profile list!\n");
        exit(EXIT_FAILURE);
    }

    rs2_stream_profile* stream_profile = (rs2_stream_profile*)rs2_get_stream_profile(stream_profile_list, 0, &e);
    if (e)
    {
        printf("Failed to create stream profile!\n");
        exit(EXIT_FAILURE);
    }

    rs2_stream stream; rs2_format format; int index; int unique_id; int framerate;
    rs2_get_stream_profile_data(stream_profile, &stream, &format, &index, &unique_id, &framerate, &e);
    if (e)
    {
        printf("Failed to get stream profile data!\n");
        exit(EXIT_FAILURE);
    }

    int width; int height;
    rs2_get_video_stream_resolution(stream_profile, &width, &height, &e);
    if (e)
    {
        printf("Failed to get video stream resolution data!\n");
        exit(EXIT_FAILURE);
    }

    int rows = height / HEIGHT_RATIO;
    int row_length = width / WIDTH_RATIO;
    int display_size = (rows + 1) * (row_length + 1);
    int buffer_size = display_size * sizeof(char);

    char* buffer = calloc(display_size, sizeof(char));
    char* out = NULL;

    while (1)
    {
        // This call waits until a new composite_frame is available
        // composite_frame holds a set of frames. It is used to prevent frame drops
        // The returned object should be released with rs2_release_frame(...)
        rs2_frame* frames = rs2_pipeline_wait_for_frames(pipeline, RS2_DEFAULT_TIMEOUT, &e);
        check_error(e);

        // Returns the number of frames embedded within the composite frame
        int num_of_frames = rs2_embedded_frames_count(frames, &e);
        check_error(e);

        int i;
        for (i = 0; i < num_of_frames; ++i)
        {
            // The returned object should be released with rs2_release_frame(...)
            rs2_frame* frame = rs2_extract_frame(frames, i, &e);
            check_error(e);

            // Check if the given frame can be extended to depth frame interface
            // Accept only depth frames and skip other frames
            if (0 == rs2_is_frame_extendable_to(frame, RS2_EXTENSION_DEPTH_FRAME, &e))
            {
                rs2_release_frame(frame);
                continue;
            }

            /* Retrieve depth data, configured as 16-bit depth values */
            const uint16_t* depth_frame_data = (const uint16_t*)(rs2_get_frame_data(frame, &e));
            check_error(e);

            /* Print a simple text-based representation of the image, by breaking it into 10x5 pixel
               regions and approximating the coverage of pixels within one meter */
            out = buffer;
            int x, y, i;
            int* coverage = calloc(row_length, sizeof(int));

            for (y = 0; y < height; ++y)
            {
                for (x = 0; x < width; ++x)
                {
                    // Create a depth histogram to each row
                    int coverage_index = x / WIDTH_RATIO;
                    int depth = *depth_frame_data++;
                    if (depth > 0 && depth < one_meter)
                        ++coverage[coverage_index];
                }

                if ((y % HEIGHT_RATIO) == (HEIGHT_RATIO - 1))
                {
                    for (i = 0; i < (row_length); ++i)
                    {
                        static const char* pixels = " .:nhBXWW";
                        int pixel_index = (coverage[i] / (HEIGHT_RATIO * WIDTH_RATIO / sizeof(pixels)));
                        *out++ = pixels[pixel_index];
                        coverage[i] = 0;
                    }
                    *out++ = '\n';
                }
            }
            *out++ = 0;
            printf("\n%s", buffer);

            free(coverage);
            rs2_release_frame(frame);
        }

        rs2_release_frame(frames);
    }

    // Stop the pipeline streaming
    rs2_pipeline_stop(pipeline, &e);
    check_error(e);

    // Release resources
    free(buffer);
    rs2_delete_pipeline_profile(pipeline_profile);
    rs2_delete_stream_profiles_list(stream_profile_list);
    rs2_delete_stream_profile(stream_profile);
    rs2_delete_config(config);
    rs2_delete_pipeline(pipeline);
    rs2_delete_device(dev);
    rs2_delete_device_list(device_list);
    rs2_delete_context(ctx);
    return EXIT_SUCCESS;
}

rs-pointcloud

Generate and visualize a textured 3D point cloud.

The application should open a window showing the point cloud. Using the mouse, you should be able to interact with the point cloud: rotate, zoom, and pan.

Include the cross-platform API:

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API

Next, we prepare a very short helper library encapsulating basic OpenGL rendering and window management:

#include "example.hpp"          // Include short list of convenience functions for rendering

We also include the <algorithm> STL header for std::min and std::max.

Next, we define a state struct and two helper functions. state and register_glfw_callbacks handle the rotation of the point cloud in the application, and draw_pointcloud makes all the OpenGL calls needed to display it.

// Struct for managing rotation of pointcloud view
struct state { double yaw, pitch, last_x, last_y; bool ml; float offset_x, offset_y; texture tex; };

// Helper functions
void register_glfw_callbacks(window& app, state& app_state);
void draw_pointcloud(window& app, state& app_state, rs2::points& points);

The example.hpp header lets us easily open a new window and prepare textures for rendering. The state class (declared above) is used for interacting with the mouse, with the help of a few callbacks registered through glfw:

// Create a simple OpenGL window for rendering:
window app(1280, 720, "RealSense Pointcloud Example");
// Construct an object to manage view state
state app_state = { 0, 0, 0, 0, false, 0, 0, 0 };
// register callbacks to allow manipulation of the pointcloud
register_glfw_callbacks(app, app_state);

We will be using classes from the rs2 namespace:

using namespace rs2;

As part of the API we offer the pointcloud class, which computes a point cloud and corresponding texture mapping from depth and color frames. To make sure we always have something to display, we also make an rs2::points object to store the results of the point cloud calculation.

// Declare pointcloud object, for calculating pointclouds and texture mappings
pointcloud pc = rs2::context().create_pointcloud();
// We want the points object to be persistent so we can display the last cloud when a frame drops
rs2::points points;

The pipeline class is the entry point to the SDK's functionality:

// Declare RealSense pipeline, encapsulating the actual device and sensors
pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();

Next, inside the main loop, we wait for the next set of frames:

auto data = pipe.wait_for_frames(); // Wait for next set of frames from the camera

Using helper functions on the frameset object, we look for new depth and color frames. We pass the color frame to the pointcloud object to use as its texture, and also hand it to OpenGL with the help of the texture class. Then we generate a new point cloud.

// Wait for the next set of frames from the camera
auto frames = pipe.wait_for_frames();
auto depth = frames.get_depth_frame();

// Generate the pointcloud and texture mappings
points = pc.calculate(depth);

auto color = frames.get_color_frame();
// Tell pointcloud object to map to this color frame
pc.map_to(color);
// Upload the color frame to OpenGL
app_state.tex.upload(color);

Finally, we call draw_pointcloud to draw the point cloud.

draw_pointcloud(app, app_state, points);

draw_pointcloud is mostly calls into OpenGL, but the key part iterates over all the points in the point cloud and, wherever we have depth data, uploads the point's coordinates and texture-mapping coordinates to OpenGL.

/* this segment actually prints the pointcloud */
auto vertices = points.get_vertices();              // get vertices
auto tex_coords = points.get_texture_coordinates(); // and texture coordinates
for (int i = 0; i < points.size(); i++)
{
    if (vertices[i].z)
    {
        // upload the point and texture coordinates only for points we have depth data for
        glVertex3fv(vertices[i]);
        glTexCoord2fv(tex_coords[i]);
    }
}
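
Assembled, the rendering loop looks roughly like this (again a sketch, with the same assumption that the example.hpp window helper converts to bool while the window remains open):

while (app) // Application still alive?
{
    // Wait for the next set of frames from the camera
    auto frames = pipe.wait_for_frames();
    auto depth = frames.get_depth_frame();

    // Generate the pointcloud and texture mappings
    points = pc.calculate(depth);

    auto color = frames.get_color_frame();
    // Tell pointcloud object to map to this color frame
    pc.map_to(color);
    // Upload the color frame to OpenGL
    app_state.tex.upload(color);

    // Draw the pointcloud
    draw_pointcloud(app, app_state, points);
}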

OpenGL.cpp

#include <iostream>
#ifdef _WIN32
    #define WIN32_LEAN_AND_MEAN 1
    #define NOMINMAX 1
    #include <windows.h>
#endif
#if defined(__APPLE__)
    #include <OpenGL/gl.h>
    #include <OpenGL/glu.h>
#else
    #include <GL/gl.h>
    #include <GL/glu.h>
#endif
#include "opencv2/core.hpp"
#include "opencv2/core/opengl.hpp"
#include "opencv2/core/cuda.hpp"
#include "opencv2/highgui.hpp"

using namespace std;
using namespace cv;
using namespace cv::cuda;

const int win_width = 800;
const int win_height = 640;

struct DrawData
{
    ogl::Arrays arr;
    ogl::Texture2D tex;
    ogl::Buffer indices;
};

void draw(void* userdata);

void draw(void* userdata)
{
    DrawData* data = static_cast<DrawData*>(userdata);
    glRotated(0.6, 0, 1, 0);
    ogl::render(data->arr, data->indices, ogl::TRIANGLES);
}

int main(int argc, char* argv[])
{
    string filename;
    if (argc < 2)
    {
        cout << "Usage: " << argv[0] << " image" << endl;
        filename = "../data/lena.jpg";
    }
    else
        filename = argv[1];

    Mat img = imread(filename);
    if (img.empty())
    {
        cerr << "Can't open image " << filename << endl;
        return -1;
    }

    namedWindow("OpenGL", WINDOW_OPENGL);
    resizeWindow("OpenGL", win_width, win_height);

    Mat_<Vec2f> vertex(1, 4);
    vertex << Vec2f(-1, 1), Vec2f(-1, -1), Vec2f(1, -1), Vec2f(1, 1);

    Mat_<Vec2f> texCoords(1, 4);
    texCoords << Vec2f(0, 0), Vec2f(0, 1), Vec2f(1, 1), Vec2f(1, 0);

    Mat_<int> indices(1, 6);
    indices << 0, 1, 2, 2, 3, 0;

    DrawData data;
    data.arr.setVertexArray(vertex);
    data.arr.setTexCoordArray(texCoords);
    data.indices.copyFrom(indices);
    data.tex.copyFrom(img);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)win_width / win_height, 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0, 0, 3, 0, 0, 0, 0, 1, 0);

    glEnable(GL_TEXTURE_2D);
    data.tex.bind();
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexEnvi(GL_TEXTURE_2D, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glDisable(GL_CULL_FACE);

    setOpenGlDrawCallback("OpenGL", draw, &data);

    for (;;)
    {
        updateWindow("OpenGL");
        char key = (char)waitKey(40);
        if (key == 27)
            break;
    }

    setOpenGlDrawCallback("OpenGL", 0, 0);
    destroyAllWindows();
    return 0;
}

Realsense SDK 2.0 + OpenCV 

A minimal OpenCV application for visualizing depth data

The following example opens an OpenCV UI window and renders a colorized depth stream to it. The following code snippet is used to create a cv::Mat from an rs2::frame:
// Query frame size (width and height)
const int w = depth.as<rs2::video_frame>().get_width();
const int h = depth.as<rs2::video_frame>().get_height();
// Create OpenCV matrix of size (w,h) from the colorized depth data
Mat image(Size(w, h), CV_8UC3, (void*)depth.get_data(), Mat::AUTO_STEP);

Source code on GitHub:

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include <opencv2/opencv.hpp>   // Include OpenCV API

int main(int argc, char * argv[]) try
{
    // Declare depth colorizer for pretty visualization of depth data
    rs2::colorizer color_map;

    // Declare RealSense pipeline, encapsulating the actual device and sensors
    rs2::pipeline pipe;
    // Start streaming with default recommended configuration
    pipe.start();

    using namespace cv;
    const auto window_name = "Display Image";
    namedWindow(window_name, WINDOW_AUTOSIZE);

    while (waitKey(1) < 0 && getWindowProperty(window_name, WND_PROP_AUTOSIZE) >= 0)
    {
        rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
        rs2::frame depth = data.get_depth_frame().apply_filter(color_map);

        // Query frame size (width and height)
        const int w = depth.as<rs2::video_frame>().get_width();
        const int h = depth.as<rs2::video_frame>().get_height();

        // Create OpenCV matrix of size (w,h) from the colorized depth data
        Mat image(Size(w, h), CV_8UC3, (void*)depth.get_data(), Mat::AUTO_STEP);

        // Update the window with new data
        imshow(window_name, image);
    }

    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception& e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}

Simple background removal using the GrabCut algorithm

The GrabCut algorithm

The GrabCuts example demonstrates how an existing 2D algorithm can be enhanced with 3D data: the GrabCut algorithm is commonly used for interactive, user-assisted foreground extraction.

A detailed explanation of the algorithm (in Chinese): 图像分割之(三)从Graph Cut到Grab Cut (zouxy09's CSDN blog).

In short, GrabCut uses the texture (color) information and the boundary (contrast) information in the image, so that only a small amount of user interaction is needed to obtain a fairly good segmentation result.
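
For comparison, here is a minimal sketch of the classic user-assisted call, where the initial guess comes from a user-drawn rectangle rather than from depth data (the helper function and parameter values below are illustrative, not taken from the sample):

#include <opencv2/opencv.hpp>

// Classic rectangle-initialized GrabCut: everything outside `roi` is treated as
// definite background, and the algorithm refines the foreground estimate inside it.
cv::Mat grabcut_with_rect(const cv::Mat& color, cv::Rect roi, int iterations = 5)
{
    cv::Mat mask, bgModel, fgModel;
    cv::grabCut(color, mask, roi, bgModel, fgModel, iterations, cv::GC_INIT_WITH_RECT);

    // Keep pixels labeled as definite or probable foreground
    cv::Mat3b foreground = cv::Mat3b::zeros(color.rows, color.cols);
    color.copyTo(foreground, (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD));
    return foreground;
}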

Replacing user input with an initial guess based on depth data

Getting aligned color and depth

First, we get a pair of spatially and temporally synchronized frames:

frameset data = pipe.wait_for_frames();
// Make sure the frameset is spatially aligned
// (each pixel in depth image corresponds to the same pixel in the color image)
frameset aligned_set = align_to.process(data);
frame depth = aligned_set.get_depth_frame();
auto color_mat = frame_to_mat(aligned_set.get_color_frame());

Generating near/far masks

Next, we generate pixel regions to estimate near and far objects. We use basic morphological transformations to improve the quality of the two masks:

// Generate "near" mask image:
auto near = frame_to_mat(bw_depth);
cvtColor(near, near, COLOR_BGR2GRAY);
// Take just values within range [180-255]
// These will roughly correspond to near objects due to histogram equalization
create_mask_from_depth(near, 180, THRESH_BINARY);

// Generate "far" mask image:
auto far = frame_to_mat(bw_depth);
cvtColor(far, far, COLOR_BGR2GRAY);
// Note: 0 value does not indicate pixel near the camera, and requires special attention:
far.setTo(255, far == 0);
create_mask_from_depth(far, 100, THRESH_BINARY_INV);

Invoking the cv::grabCut algorithm

The two masks are combined into a single initial guess:

// GrabCut algorithm needs a mask with every pixel marked as either:
// BGD, FGD, PR_BGD, PR_FGD
Mat mask;
mask.create(near.size(), CV_8UC1);
mask.setTo(Scalar::all(GC_BGD)); // Set "background" as default guess
mask.setTo(GC_PR_BGD, far == 0); // Relax this to "probably background" for pixels outside "far" region
mask.setTo(GC_FGD, near == 255); // Set pixels within the "near" region to "foreground"

Run the algorithm:

Mat bgModel, fgModel;
cv::grabCut(color_mat, mask, Rect(), bgModel, fgModel, 1, cv::GC_INIT_WITH_MASK);

And generate the resulting image:

// Extract foreground pixels based on refined mask from the algorithm
cv::Mat3b foreground = cv::Mat3b::zeros(color_mat.rows, color_mat.cols);
color_mat.copyTo(foreground, (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD));
cv::imshow(window_name, foreground);

(Example result: dog image)

Source code on GitHub:

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include <opencv2/opencv.hpp>   // Include OpenCV API
#include "../cv-helpers.hpp"    // Helper functions for conversions between RealSense and OpenCV

int main(int argc, char * argv[]) try
{
    using namespace cv;
    using namespace rs2;

    // Define colorizer and align processing-blocks
    colorizer colorize;
    align align_to(RS2_STREAM_COLOR);

    // Start the camera
    pipeline pipe;
    pipe.start();

    const auto window_name = "Display Image";
    namedWindow(window_name, WINDOW_AUTOSIZE);

    // We are using StructuringElement for erode / dilate operations
    auto gen_element = [](int erosion_size)
    {
        return getStructuringElement(MORPH_RECT,
            Size(erosion_size + 1, erosion_size + 1),
            Point(erosion_size, erosion_size));
    };

    const int erosion_size = 3;
    auto erode_less = gen_element(erosion_size);
    auto erode_more = gen_element(erosion_size * 2);

    // The following operation takes a grayscale image,
    // performs a threshold on it, closes small holes and erodes the white area
    auto create_mask_from_depth = [&](Mat& depth, int thresh, ThresholdTypes type)
    {
        threshold(depth, depth, thresh, 255, type);
        dilate(depth, depth, erode_less);
        erode(depth, depth, erode_more);
    };

    // Skips some frames to allow for auto-exposure stabilization
    for (int i = 0; i < 10; i++) pipe.wait_for_frames();

    while (waitKey(1) < 0 && getWindowProperty(window_name, WND_PROP_AUTOSIZE) >= 0)
    {
        frameset data = pipe.wait_for_frames();
        // Make sure the frameset is spatially aligned
        // (each pixel in depth image corresponds to the same pixel in the color image)
        frameset aligned_set = align_to.process(data);
        frame depth = aligned_set.get_depth_frame();
        auto color_mat = frame_to_mat(aligned_set.get_color_frame());

        // Colorize depth image with white being near and black being far
        // This will take advantage of histogram equalization done by the colorizer
        colorize.set_option(RS2_OPTION_COLOR_SCHEME, 2);
        frame bw_depth = depth.apply_filter(colorize);

        // Generate "near" mask image:
        auto near = frame_to_mat(bw_depth);
        cvtColor(near, near, COLOR_BGR2GRAY);
        // Take just values within range [180-255]
        // These will roughly correspond to near objects due to histogram equalization
        create_mask_from_depth(near, 180, THRESH_BINARY);

        // Generate "far" mask image:
        auto far = frame_to_mat(bw_depth);
        cvtColor(far, far, COLOR_BGR2GRAY);
        far.setTo(255, far == 0); // Note: 0 value does not indicate pixel near the camera, and requires special attention
        create_mask_from_depth(far, 100, THRESH_BINARY_INV);

        // GrabCut algorithm needs a mask with every pixel marked as either:
        // BGD, FGD, PR_BGD, PR_FGD
        Mat mask;
        mask.create(near.size(), CV_8UC1);
        mask.setTo(Scalar::all(GC_BGD)); // Set "background" as default guess
        mask.setTo(GC_PR_BGD, far == 0); // Relax this to "probably background" for pixels outside "far" region
        mask.setTo(GC_FGD, near == 255); // Set pixels within the "near" region to "foreground"

        // Run Grab-Cut algorithm:
        Mat bgModel, fgModel;
        grabCut(color_mat, mask, Rect(), bgModel, fgModel, 1, GC_INIT_WITH_MASK);

        // Extract foreground pixels based on refined mask from the algorithm
        Mat3b foreground = Mat3b::zeros(color_mat.rows, color_mat.cols);
        color_mat.copyTo(foreground, (mask == GC_FGD) | (mask == GC_PR_FGD));
        imshow(window_name, foreground);
    }

    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception& e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}

Depth filtering for collision avoidance

Problem statement

For collision avoidance, having reliable depth takes priority over a high fill rate. In stereo-based systems, unreliable readings can occur due to several optical and algorithmic effects, including repetitive geometry and moiré patterns introduced during rectification. There are several well-known ways to remove such invalid depth values:

  1. When possible, increasing the power of the IR projector introduces a sufficient amount of noise into the image and helps the algorithm resolve problematic cases correctly.
  2. In addition to 1, using an optical filter that blocks visible light and keeps only the projector pattern removes erroneous near-range depth values.
  3. D400-series cameras include a set of on-chip parameters controlling depth invalidation. Loading a custom "High Confidence" preset helps the ASIC discard ambiguous pixels.
  4. Finally, software post-processing can be applied to keep only high-confidence depth values (a simple sketch of this idea follows this list).
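
As a minimal illustration of option 4, the SDK's built-in post-processing blocks can already be chained to discard low-value readings before the custom filter described below is applied (a sketch assuming the standard librealsense2 decimation and threshold filters; the distance limits are arbitrary):

#include <librealsense2/rs.hpp>

// Chain two standard post-processing blocks on each depth frame:
// decimation reduces resolution (and noise), and the threshold filter clips
// depth values outside the range that matters for collision avoidance.
rs2::frame filter_depth(const rs2::depth_frame& depth)
{
    static rs2::decimation_filter decimate;         // downsample by the default factor
    static rs2::threshold_filter  clip(0.3f, 4.0f); // keep depth between 0.3 m and 4 m

    rs2::frame f = decimate.process(depth);
    return clip.process(f);
}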

The High Confidence preset

The following is used to load the custom preset to the device prior to streaming:

rs2::pipeline pipe;
rs2::config cfg;
cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480);
cfg.enable_stream(RS2_STREAM_INFRARED, 1);

std::ifstream file("./camera-settings.json");
if (file.good())
{
    std::string str((std::istreambuf_iterator<char>(file)), std::istreambuf_iterator<char>());
    auto prof = cfg.resolve(pipe);
    if (auto advanced = prof.get_device().as<rs400::advanced_mode>())
    {
        advanced.load_json(str);
    }
}

The high_confidence_filter class

Next, we define the high_confidence_filter class. Inheriting from rs2::filter and implementing the SDK processing-block pattern makes this algorithm composable with other SDK methods, such as rs2::pointcloud and rs2::align. In particular, high_confidence_filter consumes synchronized depth and infrared pairs and outputs a new synchronized pair of downsampled and filtered depth and infrared frames.

The core idea behind this algorithm is that areas with well-defined features in the infrared image are more likely to have high-confidence corresponding depth. The algorithm runs simultaneously on the depth and the infrared images and masks out everything except edges and corners.

The downsampling step

Downsampling is a very common first step in any depth processing algorithm. The key observation is that downsampling reduces spatial (X-Y) accuracy while preserving Z (depth) accuracy.

It is so common that the SDK offers a built-in downsampling method in the form of rs2::decimation_filter. It is important to note that standard OpenCV downsampling is not ideal for depth images. In this example we show an alternative, depth-aware way of implementing downsampling. It is conceptually similar to rs2::decimation_filter in that it picks one non-zero depth value for each 4x4 block, but unlike rs2::decimation_filter it picks the closest depth value instead of the median. This makes sense in the context of collision avoidance, since we want to preserve the minimal distance to an object.

A naive implementation of this approach:

for (int y = 0; y < sizeYresized; y++)
    for (int x = 0; x < source.cols; x += DOWNSAMPLE_FACTOR)
    {
        uint16_t min_value = MAX_DEPTH;

        // Loop over 4x4 quad
        for (int i = 0; i < DOWNSAMPLE_FACTOR; i++)
            for (int j = 0; j < DOWNSAMPLE_FACTOR; j++)
            {
                auto pixel = source.at<uint16_t>(y * DOWNSAMPLE_FACTOR + i, x + j);
                // Only include non-zero pixels in min calculation
                if (pixel) min_value = std::min(min_value, pixel);
            }

        // If no non-zero pixels were found, mark the output as zero
        if (min_value == MAX_DEPTH) min_value = 0;
        pDest->at<uint16_t>(y, x / DOWNSAMPLE_FACTOR) = min_value;
    }

The main filter

The core filter performs the following sequence of operations:

filter_edges(&sub_areas[i]);  // Find edges in the infrared image
filter_harris(&sub_areas[i]); // Find corners in the infrared image
// Combine the two masks together:
cv::bitwise_or(sub_areas[i].edge_mask, sub_areas[i].harris_mask, sub_areas[i].combined_mask);
// morphology: open(src, element) = dilate(erode(src,element))
cv::morphologyEx(sub_areas[i].combined_mask, sub_areas[i].combined_mask,
    cv::MORPH_OPEN, cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3)));
// Copy masked depth values:
sub_areas[i].decimated_depth.copyTo(sub_areas[i].output_depth, sub_areas[i].combined_mask);

All the OpenCV matrices are split into parts (sub_areas[i]). This is done to help parallelize the code: each execution thread can run on its own image region.
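
As a rough illustration of that idea (not the sample's actual data structure), an image can be split into horizontal strips whose cv::Mat headers share memory with the parent image, so each thread can work on its own region without copying:

#include <opencv2/opencv.hpp>
#include <vector>

// Split an image into num_strips horizontal regions of interest.
// Each returned cv::Mat is a view into the original data, not a copy.
std::vector<cv::Mat> split_into_strips(cv::Mat& img, int num_strips)
{
    std::vector<cv::Mat> strips;
    int strip_height = img.rows / num_strips;
    for (int i = 0; i < num_strips; ++i)
    {
        int y0 = i * strip_height;
        int h = (i == num_strips - 1) ? img.rows - y0 : strip_height;
        strips.push_back(img(cv::Rect(0, y0, img.cols, h)));
    }
    return strips;
}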

Edge filtering is done using the OpenCV Scharr operator:

cv::Scharr(area->decimated_ir, area->scharr_x, CV_16S, 1, 0);
cv::convertScaleAbs(area->scharr_x, area->abs_scharr_x);
cv::Scharr(area->decimated_ir, area->scharr_y, CV_16S, 0, 1);
cv::convertScaleAbs(area->scharr_y, area->abs_scharr_y);
cv::addWeighted(area->abs_scharr_x, 0.5, area->abs_scharr_y, 0.5, 0, area->edge_mask);
cv::threshold(area->edge_mask, area->edge_mask, 192, 255, cv::THRESH_BINARY);

Corner filtering is done using the OpenCV Harris detector:

area->decimated_ir.convertTo(area->float_ir, CV_32F);
// Harris corner detection
cv::cornerHarris(area->float_ir, area->corners, 2, 3, 0.04);
cv::threshold(area->corners, area->corners, 300, 255, cv::THRESH_BINARY);
area->corners.convertTo(area->harris_mask, CV_8U);

The code above uses Harris corner detection; a related tutorial (in Chinese):

 [opencv_C++] 入门强推!!!【B站最全】_哔哩哔哩_bilibili

An image can be divided into flat regions, edges, and corners.

What is a corner?

Consider two images of the same scene taken from different viewpoints: the two views can be matched by finding corresponding corner points. Zooming in on such corner regions, the characteristics of a corner can be summarized intuitively:

>intersections between contours;

>features that, for the same scene, usually remain stable even when the viewpoint changes;

>pixels in the neighborhood of the point show large variation both in gradient direction and in gradient magnitude;

What is the basic idea of corner detection algorithms?

The basic idea is to slide a fixed window over the image in arbitrary directions and compare the window content before and after the shift. If sliding in any direction produces a large change in grayscale inside the window, we can conclude that the window contains a corner.
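
As a rough illustration of this idea (independent of the sample code), the Harris response R = det(M) - k * trace(M)^2 can be computed from image gradients as sketched below; cv::cornerHarris performs an equivalent computation internally:

#include <opencv2/opencv.hpp>

// Compute a Harris corner response map for a single-channel grayscale image.
// Large positive R -> corner, negative R -> edge, |R| near zero -> flat region.
cv::Mat harris_response(const cv::Mat& gray, int block_size = 2, double k = 0.04)
{
    cv::Mat Ix, Iy;
    cv::Sobel(gray, Ix, CV_32F, 1, 0, 3); // horizontal gradient
    cv::Sobel(gray, Iy, CV_32F, 0, 1, 3); // vertical gradient

    // Elements of the structure matrix M, averaged over the sliding window
    cv::Mat Ixx = Ix.mul(Ix), Iyy = Iy.mul(Iy), Ixy = Ix.mul(Iy);
    cv::boxFilter(Ixx, Ixx, CV_32F, cv::Size(block_size, block_size));
    cv::boxFilter(Iyy, Iyy, CV_32F, cv::Size(block_size, block_size));
    cv::boxFilter(Ixy, Ixy, CV_32F, cv::Size(block_size, block_size));

    // R = det(M) - k * trace(M)^2
    cv::Mat det = Ixx.mul(Iyy) - Ixy.mul(Ixy);
    cv::Mat trace = Ixx + Iyy;
    cv::Mat response = det - k * trace.mul(trace);
    return response;
}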

SDK integration

The sdk_handle method is responsible for converting input frames into cv::Mat objects and converting the resulting cv::Mat objects back into new rs2::frame objects. This extra layer ensures seamless interoperability between the algorithm and the SDK. The algorithm's output can later be used for point-cloud generation and export, stream alignment, colorized visualization, and in combination with other SDK post-processing blocks.

Once a new type of input frame is detected, sdk_handle generates a new SDK video stream profile with the decimated resolution and updated intrinsics:
if (!_output_ir_profile || _input_ir_profile.get() != ir_frame.get_profile().get())
{
    auto p = ir_frame.get_profile().as<rs2::video_stream_profile>();
    auto intr = p.get_intrinsics() / DOWNSAMPLE_FACTOR;
    _input_ir_profile = p;
    _output_ir_profile = p.clone(p.stream_type(), p.stream_index(), p.format(),
        p.width() / DOWNSAMPLE_FACTOR, p.height() / DOWNSAMPLE_FACTOR, intr);
}

Once the output image is ready, it is copied into a new rs2::frame:
auto res_ir = src.allocate_video_frame(_output_ir_profile, ir_frame, 0,
    newW, newH, ir_frame.get_bytes_per_pixel() * newW);
memcpy((void*)res_ir.get_data(), _decimated_ir.data, newW * newH);

Finally, the two resulting frames (depth and infrared) are output together as an rs2::frameset:
std::vector<rs2::frame> fs{ res_ir, res_depth };
auto cmp = src.allocate_composite_frame(fs);
src.frame_ready(cmp);

Once wrapped as an rs2::filter, the algorithm can be applied like any other SDK processing block:
rs2::frameset data = pipe.wait_for_frames();
data = data.apply_filter(filter);