
Reproducing the ORB-SLAM3 Algorithm and Solving the Problems Encountered (including T265)


GitHub - zhaozhongch/orbslam3_ros

https://github.com/UZ-SLAMLab/ORB_SLAM3

"ORB-SLAM3 installation tutorial" (Smile_HT's blog on CSDN)

This walkthrough follows the authors above.

Environment: Ubuntu 18.04, ROS Melodic, upstream ORB-SLAM3.

1. Building the source

In CMakeLists.txt, change the required OpenCV version from 4.4 to 3.2 (e.g. find_package(OpenCV 4.4) becomes find_package(OpenCV 3.2)), since ROS Melodic on Ubuntu 18.04 ships OpenCV 3.2.

chmod +x build.sh
./build.sh

2. Running the KITTI stereo dataset (without ROS)

In the /ORB_SLAM3/Examples/Stereo directory:

./stereo_kitti path_to_vocabulary path_to_settings path_to_sequence
./stereo_kitti ../../Vocabulary/ORBvoc.txt ./KITTI1.yaml /home/flycar/dataset/kitti/stereo

KITTI1.yaml is as follows:

%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Pinhole camera model
Camera.type: "PinHole"
# Fisheye camera model (use instead of the above for fisheye lenses)
#Camera.type: "KannalaBrandt8"

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 718.856
Camera.fy: 718.856
Camera.cx: 607.1928
Camera.cy: 185.2157

Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0

Camera.width: 1241
Camera.height: 376

# Stereo baseline. Note: ORB-SLAM3 reads Camera.bf as baseline (m) times fx (px),
# so with fx = 718.856 this entry would normally be about 386.1 rather than the
# raw 0.53716 m baseline kept here from the original post.
Camera.bf: 0.53716

# Close/far threshold for stereo points, in units of the baseline
ThDepth: 35.0

# Camera frames per second
Camera.fps: 10.0

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
# Camera.RGB: 1

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 5000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.1
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 1
Viewer.PointSize: 2
Viewer.CameraSize: 0.15
Viewer.CameraLineWidth: 2
Viewer.ViewpointX: 0
Viewer.ViewpointY: -100
Viewer.ViewpointZ: -0.1
Viewer.ViewpointF: 2000
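As a quick sanity check of the stereo entries, the two relations ORB-SLAM3 applies can be recomputed from the calibration. This is a minimal sketch, not part of the original post; the constants are taken from the YAML above:

#include <iostream>

int main() {
    const double fx = 718.856;          // focal length in pixels (Camera.fx)
    const double baseline_m = 0.53716;  // stereo baseline in meters
    const double thDepth = 35.0;        // ThDepth, in units of the baseline

    // ORB-SLAM3 reads Camera.bf as baseline * fx
    const double bf = baseline_m * fx;  // about 386.1

    // Stereo points closer than ThDepth baselines are treated as "close" (reliable)
    std::cout << "Camera.bf = " << bf << "\n"
              << "close-point cutoff = " << thDepth * baseline_m << " m\n";
    return 0;
}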

3. Running the TUM RGB-D dataset (without ROS)

In the /ORB_SLAM3/Examples/RGB-D directory:

./rgbd_tum path_to_vocabulary path_to_settings path_to_sequence path_to_association

Note: before using a TUM sequence, the RGB and depth timestamps must be associated. In the dataset directory, run:

python 1.py rgb.txt depth.txt > accelerometer.txt

1.py (a copy of TUM's associate.py) is shown below. Note that this writes the associations into accelerometer.txt, overwriting the sequence's accelerometer log; that file is then passed as path_to_association when running rgbd_tum.

#!/usr/bin/python
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, Juergen Sturm, TUM
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of TUM nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Requirements:
# sudo apt-get install python-argparse

"""
The Kinect provides the color and depth images in an un-synchronized way. This means that the set of time stamps from the color images do not intersect with those of the depth images. Therefore, we need some way of associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file and the depth.txt file, and joins them by finding the best matches.
"""

import argparse
import sys
import os
import numpy


def read_file_list(filename):
    """
    Reads a trajectory from a text file.

    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitrary data (e.g., a 3D position and 3D orientation) associated to this timestamp.

    Input:
    filename -- File name

    Output:
    dict -- dictionary of (stamp,data) tuples
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(",", " ").replace("\t", " ").split("\n")
    list = [[v.strip() for v in line.split(" ") if v.strip() != ""] for line in lines if len(line) > 0 and line[0] != "#"]
    list = [(float(l[0]), l[1:]) for l in list if len(l) > 1]
    return dict(list)


def associate(first_list, second_list, offset, max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim
    to find the closest match for every input tuple.

    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    """
    # list() so that .remove() works on both Python 2 and Python 3
    first_keys = list(first_list.keys())
    second_keys = list(second_list.keys())
    potential_matches = [(abs(a - (b + offset)), a, b)
                         for a in first_keys
                         for b in second_keys
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    matches.sort()
    return matches


if __name__ == '__main__':
    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them
    ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)', default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)', default=0.02)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list, float(args.offset), float(args.max_difference))

    if args.first_only:
        for a, b in matches:
            print("%f %s" % (a, " ".join(first_list[a])))
    else:
        for a, b in matches:
            print("%f %s %f %s" % (a, " ".join(first_list[a]), b - float(args.offset), " ".join(second_list[b])))

./rgbd_tum ../../Vocabulary/ORBvoc.txt ./TUM1.yaml /home/flycar/dataset/TUM/rgbd_dataset_freiburg2_pioneer_slam/ /home/flycar/dataset/TUM/rgbd_dataset_freiburg2_pioneer_slam/accelerometer.txt

4. Running a dataset bag under ROS (the commands below use the EuRoC MH_03 bag)

Download the required packages. Link: https://pan.baidu.com/s/1HPaD0h6D8lVmsy1_Qus8aw  extraction code: qikq

Install the third-party libraries:

cd orbslam3_ros
./build_thrid_party.sh

Install Pangolin 0.6:

cd Pangolin0.6
mkdir build
cd build
cmake ..
sudo make -j16
sudo make install

Build ORB-SLAM3:

cd ../..
catkin_make

Run ORB-SLAM3 on the bag:

roscore
cd orbslam3/
source devel/setup.bash
rosrun orbslam3 ros_stereo_inertial src/orbslam3_ros/Vocabulary/ORBvoc.txt src/orbslam3_ros/Examples/Stereo-Inertial/EuRoC.yaml true
rosbag play /home/flycar/bag/MH_03_medium.bag /cam0/image_raw:=/gray_image0 /cam1/image_raw:=/gray_image1 /imu0:=/gx5/imu/data

The := arguments remap the EuRoC bag's topics to the image and IMU topics this fork subscribes to.

5. Running ORB-SLAM3 in real time with a USB camera

Modify ros_mono.cc under ../orbslam3/src/orbslam3_ros/Examples/ROS/src so that the node's image subscription points at the usb_cam topic, as sketched below.
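The exact edit was shown in a screenshot that is not reproduced here. In the upstream ORB_SLAM3 example, ros_mono.cc subscribes to /camera/image_raw, while the usb_cam driver publishes /usb_cam/image_raw by default, so a plausible version of the change (an assumption, not the author's confirmed diff) is:

// Fragment of ros_mono.cc: subscribe to the usb_cam driver's topic
// instead of the upstream default.
// Before (upstream):
//   ros::Subscriber sub = nodeHandler.subscribe("/camera/image_raw", 1,
//                                               &ImageGrabber::GrabImage, &igb);
// After (assumed edit for usb_cam):
ros::Subscriber sub = nodeHandler.subscribe("/usb_cam/image_raw", 1,
                                            &ImageGrabber::GrabImage, &igb);

Alternatively, the same effect can be had without touching the source by remapping on the command line, e.g. appending /camera/image_raw:=/usb_cam/image_raw to the rosrun command below.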

Rebuild the workspace:

cd orbslam3/
catkin_make

Install and launch the camera driver:

sudo apt-get install ros-melodic-usb-cam
roslaunch usb_cam usb_cam-test.launch

Run ORB-SLAM3:

roscore
cd orbslam3/
source devel/setup.bash
rosrun orbslam3 ros_mono src/orbslam3_ros/Vocabulary/ORBvoc.txt src/orbslam3_ros/Examples/Monocular/EuRoC.yaml

EuRoC.yaml serves only as a placeholder here; for usable tracking, calibrate your own camera and supply a matching settings file.

6. Running ORB-SLAM3 with the RealSense T265

Reference: "Intel RealSense T265 usage tutorial" (熊猫飞天's blog on CSDN)

Modify ros_stereo.cc under ../orbslam3/src/orbslam3_ros/Examples/ROS/src/ so that the left/right image subscriptions point at the T265 fisheye topics, as sketched below.
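As above, the screenshot with the exact edit is missing. Upstream ros_stereo.cc subscribes to /camera/left/image_raw and /camera/right/image_raw through message_filters, while the realsense2_camera T265 launch publishes the two fisheye streams on separate topics, so a plausible version of the change (an assumption, not the author's confirmed diff) is:

// Fragment of ros_stereo.cc: point the left/right subscriptions at the
// T265 fisheye streams published by realsense2_camera.
// Upstream topics were "/camera/left/image_raw" and "/camera/right/image_raw".
message_filters::Subscriber<sensor_msgs::Image> left_sub(nh, "/camera/fisheye1/image_raw", 1);
message_filters::Subscriber<sensor_msgs::Image> right_sub(nh, "/camera/fisheye2/image_raw", 1);

Since the T265 lenses are fisheye, a settings file with Camera.type: "KannalaBrandt8" and the T265's own calibration would be more appropriate than the pinhole EuRoC.yaml used below.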

Rebuild:

cd orbslam3
catkin_make

roscore
roslaunch realsense2_camera demo_t265.launch
cd orbslam3
source devel/setup.bash
rosrun orbslam3 ros_stereo src/orbslam3_ros/Vocabulary/ORBvoc.txt src/orbslam3_ros/Examples/Stereo/EuRoC.yaml true
