I have recently been working with dense reconstruction, running a few open-source frameworks and looking at the quality of their results. Wanting a deeper understanding of the underlying algorithms and how the pipelines compare, I found this paper: OPEN-SOURCE IMAGE-BASED 3D RECONSTRUCTION PIPELINES: REVIEW, COMPARISON AND EVALUATION.
VisualSfM was one of the earliest open-source tools with an all-in-one GUI. It was developed by Wu et al. and integrates the well-known PMVS/CMVS dense reconstruction method.
Over the past decade or so, many researchers have released complete, standalone 3D reconstruction pipelines such as COLMAP, OpenMVS, or MVE. The open-source solutions mentioned above are developed mainly by the computer vision community and target a broad 3D reconstruction audience; their primary goal is therefore not metric accuracy but recovering visually realistic 3D models from cluttered image sets of arbitrary scale and low geometric quality. MicMac, on the other hand, is a fully open photogrammetric pipeline that can handle GCPs (ground control points) and camera constraints.
The paper mainly evaluates and analyzes three open-source pipelines: openMVS, COLMAP, and AliceVision. Data conversion between the three pipelines has largely already been scripted by their developers, as summarized in a table in the paper.
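As a rough illustration of what that interoperability glue looks like in practice, here is a small Python sketch of my own (not from the paper) wrapping some of the developer-provided converters. The binary names (openMVG_main_openMVG2openMVS, InterfaceCOLMAP, colmap model_converter) are real, but the exact flags can differ between tool versions and all paths are placeholders, so treat this as an assumption-laden sketch rather than a verified recipe.

```python
# Sketch: converting between pipeline formats via the developers' own tools.
# Paths are placeholders; flags may differ between tool versions.
import subprocess

def run(cmd):
    """Run a command and fail loudly if it returns non-zero."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# OpenMVG sparse result -> openMVS scene (converter ships with OpenMVG)
run(["openMVG_main_openMVG2openMVS",
     "-i", "reconstruction/sfm_data.bin",   # OpenMVG SfM output
     "-o", "mvs/scene.mvs",                 # openMVS project file
     "-d", "mvs/undistorted"])              # undistorted images for MVS

# COLMAP dense workspace -> openMVS scene (converter ships with openMVS)
run(["InterfaceCOLMAP",
     "-i", "colmap/dense",                  # COLMAP image_undistorter output
     "-o", "mvs/scene_colmap.mvs"])

# COLMAP model -> other formats (e.g. NVM) via COLMAP itself
run(["colmap", "model_converter",
     "--input_path", "colmap/sparse/0",
     "--output_path", "colmap/sparse/model.nvm",
     "--output_type", "NVM"])
```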
COLMAP is an open-source 3D reconstruction library that implements improved SfM and MVS pipelines and also provides a graphical user interface so that non-experts can use it. Project information is stored in a database. For feature extraction and matching it implements the well-known SIFT algorithm, with both CPU and GPU options, and offers a wide range of matching strategies, such as exhaustive (brute-force), sequential, vocabulary-tree, spatial, transitive, and custom matching. An image pair is considered verified if a valid geometric relation (a homography or fundamental matrix) can be estimated between the two views, and the scene graph is built up gradually in this way. COLMAP's incremental SfM starts from a carefully selected initial image pair, applies a robust next-best-view selection strategy, and then performs multi-view triangulation. Bundle adjustment uses Ceres, including global BA, to refine the camera parameters and 3D points and avoid drift.
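To make those steps concrete, here is a minimal sketch of driving COLMAP's SfM stage from Python through its command-line interface. The subcommands (feature_extractor, exhaustive_matcher, mapper) and their flags are COLMAP's own; the directory layout is my own assumption for illustration.

```python
# Minimal COLMAP SfM sketch: SIFT features -> exhaustive matching -> incremental mapping.
# Directory names are assumptions; only the colmap subcommands and flags are COLMAP's own.
import os
import subprocess

def colmap(*args):
    """Thin wrapper around the colmap CLI; raises if a command fails."""
    subprocess.run(["colmap", *args], check=True)

db, images, sparse = "project/database.db", "project/images", "project/sparse"
os.makedirs(sparse, exist_ok=True)

# 1. SIFT feature extraction (CPU or GPU), stored in the SQLite project database
colmap("feature_extractor",
       "--database_path", db,
       "--image_path", images)

# 2. Exhaustive (brute-force) matching with geometric verification;
#    sequential_matcher / vocab_tree_matcher / spatial_matcher are the alternatives.
colmap("exhaustive_matcher", "--database_path", db)

# 3. Incremental SfM: initial pair selection, next-best-view registration,
#    multi-view triangulation and (global) bundle adjustment with Ceres.
colmap("mapper",
       "--database_path", db,
       "--image_path", images,
       "--output_path", sparse)
```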
Multi-view stereo reconstruction follows the framework of (Zheng et al., 2014), implemented with a probabilistic, patch-based stereo approach (Schönberger et al., 2016a).
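Continuing the sketch above, COLMAP exposes this dense stage through three more subcommands; again, the paths are placeholders and this is only an illustrative outline.

```python
# COLMAP dense-stage sketch: undistortion -> PatchMatch stereo -> depth-map fusion.
# Paths are placeholders; patch_match_stereo requires a CUDA-enabled COLMAP build.
import subprocess

def colmap(*args):
    subprocess.run(["colmap", *args], check=True)

images, sparse, dense = "project/images", "project/sparse/0", "project/dense"

# 1. Undistort the images into a dense workspace
colmap("image_undistorter",
       "--image_path", images,
       "--input_path", sparse,
       "--output_path", dense,
       "--output_type", "COLMAP")

# 2. PatchMatch stereo with pixelwise (probabilistic) view selection
colmap("patch_match_stereo", "--workspace_path", dense)

# 3. Fuse the per-view depth/normal maps into a dense point cloud
colmap("stereo_fusion",
       "--workspace_path", dense,
       "--output_path", dense + "/fused.ply")
```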
OpenMVG provides a complete and clean SfM pipeline based on standard multi-view geometry. Feature detection and description are implemented with SIFT and AKAZE, and other invariant region detectors and descriptors are also available (Xu et al., 2014; Nistér and Stewénius, 2008). Feature matching can use classic brute force, ANN kd-trees, or cascade hashing. Geometric verification of image pairs is almost identical to COLMAP's approach. The sparse reconstruction can be computed with either an incremental (Moulon et al., 2012) or a global (Moulon et al., 2013) method, followed by bundle adjustment with the Ceres solver. For this pipeline combination, dense reconstruction is performed with the OpenMVS library, which implements a patch-based stereo method for large-scale scenes (Shen, 2013).
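For comparison, here is a similarly hedged sketch of the OpenMVG + openMVS chain. The binary names are those shipped by the two projects, but the flags have changed across releases (newer OpenMVG versions split putative matching and geometric filtering into separate tools, for example), so check them against your installed version; all paths and the sensor-width database location are assumptions.

```python
# OpenMVG (sparse) + openMVS (dense) sketch. Binary names are real; exact flags
# vary across releases and all paths here are placeholders.
import os
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

images, matches, recon, mvs = "images", "matches", "reconstruction", "mvs"
for d in (matches, recon, mvs):
    os.makedirs(d, exist_ok=True)

# 1. Image listing and intrinsics initialisation (sensor-width database path is an assumption)
run(["openMVG_main_SfMInit_ImageListing",
     "-i", images, "-o", matches,
     "-d", "sensor_width_camera_database.txt"])

# 2. SIFT/AKAZE feature detection and description
run(["openMVG_main_ComputeFeatures",
     "-i", matches + "/sfm_data.json", "-o", matches])

# 3. Putative matching (brute force / ANN kd-tree / cascade hashing);
#    newer releases do the geometric filtering in openMVG_main_GeometricFilter.
run(["openMVG_main_ComputeMatches",
     "-i", matches + "/sfm_data.json", "-o", matches])

# 4. Incremental SfM (Moulon et al., 2012); a global engine (Moulon et al., 2013) also exists.
run(["openMVG_main_IncrementalSfM",
     "-i", matches + "/sfm_data.json", "-m", matches, "-o", recon])

# 5. Export to openMVS and densify with patch-based stereo (Shen, 2013)
run(["openMVG_main_openMVG2openMVS",
     "-i", recon + "/sfm_data.bin", "-o", mvs + "/scene.mvs", "-d", mvs])
run(["DensifyPointCloud", mvs + "/scene.mvs"])
```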
For the detailed results, please refer to the paper itself. Whether this kind of comparison methodology is really conclusive is debatable; in the end, each concrete scenario still needs to be analyzed on its own terms.
VisualSfM:
[1] Wu, C., 2013. Towards linear-time incremental structure from motion. In Proc. 3DV, pp. 127-134.
[2] Wu, C., Agarwal, S., Curless, B. and Seitz, S.M., 2011. Multicore bundle adjustment. In Proc. CVPR, pp. 3057-3064.
[3] Furukawa, Y. and Ponce, J., 2009. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32(8), pp. 1362-1376.
[4] Furukawa, Y., Curless, B., Seitz, S.M. and Szeliski, R., 2010. Towards internet-scale multi-view stereo. In Proc. CVPR, pp. 1434-1441.
OpenMVG+OpenMVS:
[1] Moulon, P., Monasse, P., Perrot, R. and Marlet, R., 2016. OpenMVG: Open multiple view geometry. In International Workshop on Reproducible Research in Pattern Recognition, pp. 60-74.
[2] Moulon, P., Monasse, P. and Marlet, R., 2013. Global fusion of relative motions for robust, accurate and scalable structure from motion. In Proc. IEEE ICCV, pp. 3248-3255.
[3] Moulon, P., Monasse, P. and Marlet, R., 2012. Adaptive structure from motion with a contrario model estimation. In Asian Conference on Computer Vision, pp. 257-270, Springer, Berlin, Heidelberg.
[4] Moulon, P. and Monasse, P., 2012. Unordered feature tracking made fast and easy. In CVMP 2012.
[5] Sweeney, C., Hollerer, T. and Turk, M., 2015. Theia: A fast and scalable structure-from-motion library. In Proceedings of the 23rd ACM Int. Conf. on Multimedia, pp. 693-696. ACM.
[6] Shen, S., 2013. Accurate multiple view 3D reconstruction using patch-based stereo for large-scale scenes. IEEE Transactions on Image Processing, Vol. 22(5), pp.1901-1914.
COLMAP:
[1] Schönberger, J.L. and Frahm, J.M., 2016. Structure-from-motion revisited. In Proc. CVPR, pp. 4104-4113. (COLMAP SfM)
[2] Schönberger, J.L., Zheng, E., Frahm, J.M. and Pollefeys, M., 2016a. Pixelwise view selection for unstructured multi-view stereo. In Proc. ECCV, pp. 501-518. (COLMAP MVS)
[3] Schönberger, J.L.; Price, T.; Sattler, T.; Frahm, J.M.; Pollefeys, M., 2016b. A vote-and-verify strategy for fast spatial verification in image retrieval. In Asian Conference on Computer Vision, pp. 321-337.
[4] Schöps, T., Schönberger, J.L., Galliani, S., Sattler, T., Schindler, K.,Pollefeys, M. and Geiger, A., 2017. A multi-view stereo benchmark with high-resolution images and multi-camera videos. In Proc. CVPR, pp. 3260-3269.
[5] Zheng, E., Dunn, E., Jojic, V. and Frahm, J.M., 2014. Patchmatch based joint view selection and depthmap estimation. In Proc. CVPR, pp. 1510-1517.