
Faster-LIO Paper Translation


Abstract

This paper presents an incremental voxel-based lidar-inertial odometry (LIO) method for fast tracking of spinning and solid-state lidars. To achieve a high tracking speed, we neither use complicated tree-based structures to partition the spatial point cloud nor strict k-nearest-neighbor (k-NN) queries to compute point matching. Instead, we use incremental voxels (iVox) as the spatial data structure for the point cloud; iVox is a modification of traditional voxels that supports incremental insertion and parallel approximate k-NN queries. We propose linear iVox and PHC (Pseudo Hilbert Curve) iVox as two alternative underlying structures for our algorithm. Experiments show that, using only a modern CPU, iVox reaches 1000-2000 Hz per scan for solid-state lidars and over 200 Hz for 32-line spinning lidars, while maintaining the same level of accuracy.
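The abstract describes iVox only at a high level: a voxel-based point-cloud structure that supports incremental insertion and parallel approximate k-NN queries, with either a linear point list ("linear iVox") or a pseudo Hilbert curve ("PHC iVox") inside each voxel. The following is a minimal, single-threaded sketch of the linear-iVox idea under my own assumptions: the class and function names (LinearIVox, AddPoints, KnnSearch), the hash constants, and the fixed 3x3x3 neighborhood search are illustrative choices, not the authors' implementation, and the parallel queries, PHC variant, and map pruning are omitted.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

struct Point {
    float x, y, z;
};

// Integer voxel coordinate used as the hash-map key.
struct VoxelKey {
    std::int64_t x, y, z;
    bool operator==(const VoxelKey& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

// Spatial hash for voxel coordinates (large-prime XOR hashing).
struct VoxelKeyHash {
    std::size_t operator()(const VoxelKey& k) const {
        const std::uint64_t h = static_cast<std::uint64_t>(k.x) * 73856093ULL ^
                                static_cast<std::uint64_t>(k.y) * 19349663ULL ^
                                static_cast<std::uint64_t>(k.z) * 83492791ULL;
        return static_cast<std::size_t>(h);
    }
};

// Hypothetical sketch of a "linear iVox": a hash map of voxels, each holding a
// flat list of points; not the authors' implementation.
class LinearIVox {
public:
    explicit LinearIVox(float voxel_size) : inv_size_(1.0f / voxel_size) {}

    // Incremental insertion: each point is appended to the linear point list
    // of the voxel it falls into; no tree rebuilding or rebalancing is needed.
    void AddPoints(const std::vector<Point>& pts) {
        for (const Point& p : pts) grid_[KeyOf(p)].push_back(p);
    }

    // Approximate k-NN: only the query voxel and its 26 surrounding voxels are
    // scanned, so a true nearest neighbor lying farther away may be missed.
    std::vector<Point> KnnSearch(const Point& q, std::size_t k) const {
        std::vector<std::pair<float, Point>> cand;
        const VoxelKey c = KeyOf(q);
        for (std::int64_t dx = -1; dx <= 1; ++dx) {
            for (std::int64_t dy = -1; dy <= 1; ++dy) {
                for (std::int64_t dz = -1; dz <= 1; ++dz) {
                    const auto it = grid_.find(VoxelKey{c.x + dx, c.y + dy, c.z + dz});
                    if (it == grid_.end()) continue;
                    for (const Point& p : it->second) cand.emplace_back(Dist2(p, q), p);
                }
            }
        }
        std::sort(cand.begin(), cand.end(),
                  [](const std::pair<float, Point>& a, const std::pair<float, Point>& b) {
                      return a.first < b.first;
                  });
        if (cand.size() > k) cand.resize(k);
        std::vector<Point> out;
        for (const auto& d_p : cand) out.push_back(d_p.second);
        return out;
    }

private:
    VoxelKey KeyOf(const Point& p) const {
        return VoxelKey{static_cast<std::int64_t>(std::floor(p.x * inv_size_)),
                        static_cast<std::int64_t>(std::floor(p.y * inv_size_)),
                        static_cast<std::int64_t>(std::floor(p.z * inv_size_))};
    }
    static float Dist2(const Point& a, const Point& b) {
        const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    float inv_size_;
    std::unordered_map<VoxelKey, std::vector<Point>, VoxelKeyHash> grid_;
};
```

In a LIO loop one would call AddPoints with the newly registered scan and KnnSearch with a small k during point-to-plane matching. Because each query touches only a constant number of voxels, the per-query cost is roughly independent of the map size, which is consistent with the tracking rates quoted in the abstract; the trade-off is that the k-NN result is approximate rather than exact.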

1 Introduction

2 Related Work

3 iVox: Incremental Sparse Voxels

A. The Data Structure of iVox

B. k-NN Search

C. Incremental Map Update

4 iVox-PHC

A. The Underlying Structure of iVox-PHC

B. k-NN Search in iVox-PHC

C. Complexity of iVox-PHC and iVox

5 Experiments

6 Conclusion

7 References

