
Binocular Stereo Imaging (III): Disparity-Map Hole Filling

In Binocular Stereo Imaging (II), the point cloud generated after undistortion and stereo rectification was an improvement, but it clearly still contains many holes. Most unreliable disparity values are caused by occlusion or uneven illumination, and they can be filled in from nearby reliable disparities. The procedure is:

① Compute the integral image of the disparity map, and alongside it store, for each position, the count n of accumulated valid pixels (hole pixels are excluded from n: a hole has value 0, so it contributes nothing to the sum and would only smooth the image).

② Apply multi-level mean filtering. Start with a large window so that large hole regions get assigned values. On each subsequent pass, halve the window size and filter again using the same integral image, assigning values to smaller holes (overwriting the earlier values). Repeat until the window shrinks to 3×3, then stop; the result is final.

③ The multi-level scheme adapts to hole size: a large initial hole needs a larger neighborhood to draw values from, and a small filter window cannot fill it completely, while using a large window everywhere would severely smooth the image. So the filter window is shrunk progressively: large windows first assign values to every hole, then progressively smaller windows overwrite those values. This guarantees the holes get filled without over-smoothing the rest of the image.

The hole-filling function:

void insertDepth32f(cv::Mat& depth)
{
    const int width = depth.cols;
    const int height = depth.rows;
    float* data = (float*)depth.data;
    cv::Mat integralMap = cv::Mat::zeros(height, width, CV_64F);
    cv::Mat ptsMap = cv::Mat::zeros(height, width, CV_32S);
    double* integral = (double*)integralMap.data;
    int* ptsIntegral = (int*)ptsMap.data;
    // Copy valid pixels; holes (value ~ 0) contribute neither sum nor count
    for (int i = 0; i < height; ++i)
    {
        int id1 = i * width;
        for (int j = 0; j < width; ++j)
        {
            int id2 = id1 + j;
            if (data[id2] > 1e-3)
            {
                integral[id2] = data[id2];
                ptsIntegral[id2] = 1;
            }
        }
    }
    // Build the integral images: prefix sums along rows, then along columns
    for (int i = 0; i < height; ++i)
    {
        int id1 = i * width;
        for (int j = 1; j < width; ++j)
        {
            int id2 = id1 + j;
            integral[id2] += integral[id2 - 1];
            ptsIntegral[id2] += ptsIntegral[id2 - 1];
        }
    }
    for (int i = 1; i < height; ++i)
    {
        int id1 = i * width;
        for (int j = 0; j < width; ++j)
        {
            int id2 = id1 + j;
            integral[id2] += integral[id2 - width];
            ptsIntegral[id2] += ptsIntegral[id2 - width];
        }
    }
    // Multi-level mean filtering: halve the window on each pass.
    // Note: with dWnd = 2 the loop body executes only once (wnd = 2); to get
    // the coarse-to-fine behavior described in the steps above, initialize
    // dWnd to a larger value (e.g. a fraction of the image size).
    int wnd;
    double dWnd = 2;
    while (dWnd > 1)
    {
        wnd = int(dWnd);
        dWnd /= 2;
        for (int i = 0; i < height; ++i)
        {
            int id1 = i * width;
            for (int j = 0; j < width; ++j)
            {
                int id2 = id1 + j;
                // Clamp the filter window to the image border
                int left = std::max(0, j - wnd - 1);
                int right = std::min(j + wnd, width - 1);
                int top = std::max(0, i - wnd - 1);
                int bot = std::min(i + wnd, height - 1);
                int dx = right - left;
                int dy = (bot - top) * width;
                int idLeftTop = top * width + left;
                int idRightTop = idLeftTop + dx;
                int idLeftBot = idLeftTop + dy;
                int idRightBot = idLeftBot + dx;
                // Box sum and valid-pixel count via the integral images
                int ptsCnt = ptsIntegral[idRightBot] + ptsIntegral[idLeftTop] - (ptsIntegral[idLeftBot] + ptsIntegral[idRightTop]);
                double sumGray = integral[idRightBot] + integral[idLeftTop] - (integral[idLeftBot] + integral[idRightTop]);
                if (ptsCnt <= 0)
                {
                    continue;
                }
                data[id2] = float(sumGray / ptsCnt);
            }
        }
        // Light Gaussian smoothing after each pass (kernel size must be odd)
        int s = wnd / 2 * 2 + 1;
        if (s > 201)
        {
            s = 201;
        }
        cv::GaussianBlur(depth, depth, cv::Size(s, s), s, s);
    }
}

After undistorting and stereo-rectifying the left and right images, compute the disparity, then fill the holes in the disparity map.

Original images:

After undistortion & stereo rectification:

Resulting point cloud:

Point cloud after hole filling:

Although the filled result recovers more object detail than the unfilled one, textureless regions such as the white wall remain hollow. By default the left disparity map is computed; to compute the right disparity map instead, take care when setting the minimum disparity and the disparity range. The computed right-view disparities are negative, and disp has type 16SC1, so take the absolute value before saving. The code:

#include <opencv2/opencv.hpp>
#include <vector>
#include <string>
#include <Eigen/Core>
#include <pangolin/pangolin.h>
#include <unistd.h>

using namespace std;
using namespace Eigen;

void showPointCloud(const vector<Vector4d, aligned_allocator<Vector4d>> &pointcloud);

// Hole filling by multi-level mean filtering; the definition is identical
// to the insertDepth32f() listing shown earlier.
void insertDepth32f(cv::Mat& depth);
int main(int argc, char **argv)
{
    if (argc != 3)
    {
        cout << "ERROR: Please input pictures!" << endl;
        return 1;
    }
    cv::Mat left_image = cv::imread(argv[1], 0);
    cv::Mat right_image = cv::imread(argv[2], 0);
    // Intrinsics
    double fx = 392.7540541781677, fy = 395.9067564822596, cx = 320.5167125964447, cy = 214.0481721219656;
    // Baseline, in mm
    double b = 60;
    cv::Mat disparity_sgbm, disparity;
    // Parameter setup
    enum { STEREO_BM = 0, STEREO_SGBM = 1, STEREO_HH = 2, STEREO_VAR = 3, STEREO_3WAY = 4 };
    int numberOfDisparities = ((left_image.size().width / 8) + 15) & -16;
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 16, 3);
    sgbm->setPreFilterCap(63);
    int SADWindowSize = 9;
    int sgbmWinSize = SADWindowSize > 0 ? SADWindowSize : 3;
    sgbm->setBlockSize(sgbmWinSize);
    int cn = left_image.channels();
    sgbm->setP1(8 * cn * sgbmWinSize * sgbmWinSize);
    sgbm->setP2(32 * cn * sgbmWinSize * sgbmWinSize);
    sgbm->setMinDisparity(0);
    sgbm->setNumDisparities(numberOfDisparities);
    sgbm->setUniquenessRatio(10);
    sgbm->setSpeckleWindowSize(100);
    sgbm->setSpeckleRange(32);
    sgbm->setDisp12MaxDiff(1);
    int alg = STEREO_SGBM;
    if (alg == STEREO_HH)
        sgbm->setMode(cv::StereoSGBM::MODE_HH);
    else if (alg == STEREO_SGBM)
        sgbm->setMode(cv::StereoSGBM::MODE_SGBM);
    else if (alg == STEREO_3WAY)
        sgbm->setMode(cv::StereoSGBM::MODE_SGBM_3WAY);
    sgbm->compute(left_image, right_image, disparity_sgbm);
    // SGBM outputs 16SC1 fixed point (disparity * 16); scale back to float
    disparity_sgbm.convertTo(disparity, CV_32F, 1.0 / 16.0f);
    // Disparity post-processing: fill the holes
    insertDepth32f(disparity);
    cv::imwrite("./disparity.jpg", disparity);
    vector<Vector4d, Eigen::aligned_allocator<Vector4d>> pointcloud;
    for (int v = 0; v < left_image.rows; v++)
    {
        for (int u = 0; u < left_image.cols; u++)
        {
            // Skip invalid disparities (96.0 matches the search range here)
            if (disparity.at<float>(v, u) <= 0.0 || disparity.at<float>(v, u) >= 96.0)
            {
                continue;
            }
            // Grayscale intensity stored as the point's 4th component
            Vector4d point(0, 0, 0, left_image.at<uchar>(v, u) / 255.0);
            // Back-project pixel (u, v) with depth = fx * b / disparity
            double x = (u - cx) / fx;
            double y = (v - cy) / fy;
            double depth = fx * b / (disparity.at<float>(v, u));
            point[0] = x * depth;
            point[1] = y * depth;
            point[2] = depth;
            pointcloud.push_back(point);
        }
    }
    // Right disparity map: note the negative minimum disparity
    cv::Mat disparity_sgbm_r, disparity_r;
    int numberOfDisparities_r = ((right_image.size().width / 8) + 15) & -16;
    cv::Ptr<cv::StereoSGBM> sgbm_r = cv::StereoSGBM::create(0, 16, 3);
    sgbm_r->setPreFilterCap(63);
    sgbm_r->setBlockSize(sgbmWinSize);
    int cn_r = right_image.channels();
    sgbm_r->setP1(8 * cn_r * sgbmWinSize * sgbmWinSize);
    sgbm_r->setP2(32 * cn_r * sgbmWinSize * sgbmWinSize);
    sgbm_r->setMinDisparity(-numberOfDisparities_r);
    sgbm_r->setNumDisparities(numberOfDisparities_r);
    sgbm_r->setUniquenessRatio(10);
    sgbm_r->setSpeckleWindowSize(100);
    sgbm_r->setSpeckleRange(32);
    sgbm_r->setDisp12MaxDiff(1);
    if (alg == STEREO_HH)
        sgbm_r->setMode(cv::StereoSGBM::MODE_HH);
    else if (alg == STEREO_SGBM)
        sgbm_r->setMode(cv::StereoSGBM::MODE_SGBM);
    else if (alg == STEREO_3WAY)
        sgbm_r->setMode(cv::StereoSGBM::MODE_SGBM_3WAY);
    sgbm_r->compute(right_image, left_image, disparity_sgbm_r);
    disparity_sgbm_r.convertTo(disparity_r, CV_32F, 1.0 / 16.0f);
    // Right-view disparities come out negative; take the absolute value
    disparity_r = abs(disparity_r);
    insertDepth32f(disparity_r);
    cv::imwrite("./disparity_r.jpg", disparity_r);
    vector<Vector4d, Eigen::aligned_allocator<Vector4d>> pointcloud_r;
    for (int v = 0; v < right_image.rows; v++)
    {
        for (int u = 0; u < right_image.cols; u++)
        {
            if (disparity_r.at<float>(v, u) <= 10.0 || disparity_r.at<float>(v, u) >= 96.0)
            {
                continue;
            }
            Vector4d point1(0, 0, 0, right_image.at<uchar>(v, u) / 255.0);
            double x1 = (u - cx) / fx;
            double y1 = (v - cy) / fy;
            double depth1 = fx * b / (disparity_r.at<float>(v, u));
            point1[0] = x1 * depth1;
            point1[1] = y1 * depth1;
            point1[2] = depth1;
            pointcloud_r.push_back(point1);
        }
    }
    cv::imshow("disparity", disparity / 96.0);
    cv::imshow("disparity_r", disparity_r / 96.0);
    cv::waitKey(0);
    //showPointCloud(pointcloud);
    showPointCloud(pointcloud_r);
    return 0;
}
void showPointCloud(const vector<Vector4d, Eigen::aligned_allocator<Vector4d>> &pointcloud)
{
    if (pointcloud.empty()) {
        cerr << "Point cloud is empty!" << endl;
        return;
    }
    // Create a 1024x768 Pangolin window named "Point Cloud Viewer"
    pangolin::CreateWindowAndBind("Point Cloud Viewer", 1024, 768);
    glEnable(GL_DEPTH_TEST);  // depth test so near objects occlude far ones
    glEnable(GL_BLEND);       // alpha blending for semi-transparent rendering
    // Source weighted by its alpha, destination by (1 - source alpha)
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /*
     * ProjectionMatrix(): width = 1024, height = 768, fx = fy = 500,
     * cx = 512, cy = 389, near plane = 0.1, far plane = 1000.
     * ModelViewLookAt(): camera position, look-at point, and up direction;
     * here the camera at (0, -0.1, -1.8) looks at the origin (0, 0, 0)
     * with (0, -1, 0) as the up axis.
     */
    pangolin::OpenGlRenderState s_cam(
        pangolin::ProjectionMatrix(1024, 768, 500, 500, 512, 389, 0.1, 1000),
        pangolin::ModelViewLookAt(0, -0.1, -1.8, 0, 0, 0, 0.0, -1.0, 0.0)
    );
    // Interactive view. SetBounds() takes bottom, top, left, right
    // (relative values in [0.0, 1.0], except left given here in pixels
    // via Attach::Pix), then the aspect ratio.
    pangolin::View &d_cam = pangolin::CreateDisplay()
        .SetBounds(0.0, 1.0, pangolin::Attach::Pix(175), 1.0, -1024.0f / 768.0f)
        .SetHandler(new pangolin::Handler3D(s_cam));
    // Render until the window is closed
    while (pangolin::ShouldQuit() == false)
    {
        // Clear color and depth buffers so frames do not interfere
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // Activate the view with the camera state
        d_cam.Activate(s_cam);
        // White background
        glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
        glPointSize(2);
        // Draw the point cloud
        glBegin(GL_POINTS);
        for (auto &p : pointcloud)
        {
            glColor3f(p[3], p[3], p[3]);   // grayscale intensity
            glVertex3d(p[0], p[1], p[2]);  // 3D position
        }
        glEnd();
        pangolin::FinishFrame();
        usleep(5000);  // sleep 5 ms
    }
    return;
}

Right disparity map:

Resulting point cloud:

Tests on other scenes

(For each scene: original image, rectified image, disparity map, point cloud.)
