
Car simulation and object detection with ROS 2 on Ubuntu: running YOLO in ROS 2

1. Install ROS 2

This guide uses FishROS's one-click installer; install ROS 2 however you prefer. The author uses the Foxy release.

wget http://fishros.com/install -O fishros && . fishros 
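As an optional sanity check (not part of the original steps), you can open a new terminal after the installer finishes and confirm that ROS 2 is on the path:

printenv ROS_DISTRO   # should print the distribution you chose, e.g. foxy
ros2 --help           # lists the available ros2 sub-commands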

2. Download the code (this guide uses the Guyuehome code)

https://book.guyuehome.com/

You can follow Guyuehome's Bilibili videos step by step; they explain the basic theory and part of the environment setup:

https://www.bilibili.com/video/BV16B4y1Q7jQ?p=1&vd_source=7ab152ebd2f75f63466b8dc7d78d3cf2

3. Download the YOLOv5 code

https://github.com/fishros/yolov5_ros2/tree/main

Put the code downloaded from Guyuehome and the yolov5_ros2 code together in one folder (the src directory of your workspace).
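One possible layout, assuming a workspace at ~/ros2_ws (the path is only an example; learning_gazebo is one of the Guyuehome packages used later):

mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src
git clone https://github.com/fishros/yolov5_ros2.git
# copy the Guyuehome packages you need (e.g. learning_gazebo) into the same src directory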

4. Open a terminal and install the dependencies

sudo apt update
sudo apt install python3-pip ros-<ros2-distro>-vision-msgs   # replace <ros2-distro> with your ROS 2 distribution, e.g. humble, foxy or galactic
pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple yolov5

Note: if the pip3 install command fails while downloading torch, it means the connection to the default source is too slow and you need a different source. The author downloaded the torch offline wheel from https://download.pytorch.org/whl/cu116 and installed it manually; install CUDA before downloading the wheel.

On that page, press Ctrl+F and search for the torch build that matches your CUDA version and your Python version. The one-click ROS 2 installation sets up both Python 2 and Python 3; running python3 in a terminal shows which Python version you have.

 

Search online for the torchvision version that matches your torch version.

Right-click in the download folder to open a terminal there and install the torch wheel; do the same for torchvision. Get used to pressing the Tab key to auto-complete the file names.
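For example, on Foxy with Ubuntu 20.04 (Python 3.8) and CUDA 11.6 the installation might look like the sketch below; the wheel file names are only placeholders and must match the files you actually downloaded:

pip3 install ./torch-1.13.1+cu116-cp38-cp38-linux_x86_64.whl
pip3 install ./torchvision-0.14.1+cu116-cp38-cp38-linux_x86_64.whl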

Then re-run the dependency installation:

pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple yolov5

5. Modify yolo_detect_2d.py in the downloaded yolov5_ros2 package

from math import frexp
from traceback import print_tb
from torch import imag
from yolov5 import YOLOv5
import rclpy
from rclpy.node import Node
from ament_index_python.packages import get_package_share_directory
from rcl_interfaces.msg import ParameterDescriptor
from vision_msgs.msg import Detection2DArray, ObjectHypothesisWithPose, Detection2D
from sensor_msgs.msg import Image, CameraInfo
from cv_bridge import CvBridge
import cv2
import yaml
from yolov5_ros2.cv_tool import px2xy

package_share_directory = get_package_share_directory('yolov5_ros2')
# package_share_directory = "/home/mouse/code/github/yolov5_test/src/yolov5_ros2"


class YoloV5Ros2(Node):
    def __init__(self):
        super().__init__('yolov5_ros2')
        self.declare_parameter("device", "cuda", ParameterDescriptor(
            name="device", description="calculate_device default:cpu optional:cuda:0"))
        self.declare_parameter("model", "yolov5s", ParameterDescriptor(
            name="model", description="default: yolov5s.pt"))
        self.declare_parameter("image_topic", "/image_raw", ParameterDescriptor(
            name="image_topic", description=f"default: /image_raw"))
        # /camera/image_raw
        self.declare_parameter("camera_info_topic", "/camera/camera_info", ParameterDescriptor(
            name="camera_info_topic", description=f"default: /camera/camera_info"))

        # Read the camera parameters from the camera_info file by default;
        # if parameters are received on the topic, they override the file values.
        self.declare_parameter("camera_info_file", f"{package_share_directory}/config/camera_info.yaml", ParameterDescriptor(
            name="camera_info", description=f"{package_share_directory}/config/camera_info.yaml"))

        # Show the detection result by default
        self.declare_parameter("show_result", True, ParameterDescriptor(
            name="show_result", description=f"default: True"))

        # 1. load model
        model_path = package_share_directory + "/config/" + self.get_parameter('model').value + ".pt"
        device = self.get_parameter('device').value
        self.yolov5 = YOLOv5(model_path=model_path, device=device)

        # 2. create publisher
        self.yolo_result_pub = self.create_publisher(
            Detection2DArray, "yolo_result", 10)
        self.result_msg = Detection2DArray()

        # 3. create sub image (if 3d, sub depth; if 2d, load camera info)
        image_topic = self.get_parameter('image_topic').value
        self.image_sub = self.create_subscription(
            Image, image_topic, self.image_callback, 10)

        camera_info_topic = self.get_parameter('camera_info_topic').value
        self.camera_info_sub = self.create_subscription(
            CameraInfo, camera_info_topic, self.camera_info_callback, 1)

        # get camera info
        with open(self.get_parameter('camera_info_file').value) as f:
            self.camera_info = yaml.full_load(f.read())
            print(self.camera_info['k'], self.camera_info['d'])

        # 4. convert cv2 (cv_bridge)
        self.bridge = CvBridge()

        self.show_result = self.get_parameter('show_result').value

    def camera_info_callback(self, msg: CameraInfo):
        """
        Receive the camera parameters through this callback.
        """
        self.camera_info['k'] = msg.k
        self.camera_info['p'] = msg.p
        self.camera_info['d'] = msg.d
        self.camera_info['r'] = msg.r
        self.camera_info['roi'] = msg.roi

        self.camera_info_sub.destroy()

    def image_callback(self, msg: Image):
        # 5. detect and publish the result
        image = self.bridge.imgmsg_to_cv2(msg)
        detect_result = self.yolov5.predict(image)
        self.get_logger().info(str(detect_result))

        self.result_msg.detections.clear()
        self.result_msg.header.frame_id = "camera"
        self.result_msg.header.stamp = self.get_clock().now().to_msg()

        # parse results
        predictions = detect_result.pred[0]
        boxes = predictions[:, :4]  # x1, y1, x2, y2
        scores = predictions[:, 4]
        categories = predictions[:, 5]

        for index in range(len(categories)):
            name = detect_result.names[int(categories[index])]

            detection2d = Detection2D()
            detection2d.tracking_id = name

            # detection2d.bbox
            x1, y1, x2, y2 = boxes[index]
            x1 = int(x1)
            y1 = int(y1)
            x2 = int(x2)
            y2 = int(y2)
            center_x = (x1 + x2) / 2.0
            center_y = (y1 + y2) / 2.0

            # detection2d.bbox.center.position.x = center_x
            # detection2d.bbox.center.position.y = center_y
            # On Galactic use the center coordinates below, otherwise you get the error:
            # "Pose2D object has no attribute position". Other versions not verified.
            # See http://docs.ros.org/en/api/vision_msgs/html/msg/BoundingBox2D.html
            # and http://docs.ros.org/en/api/geometry_msgs/html/msg/Pose2D.html
            detection2d.bbox.center.x = center_x
            detection2d.bbox.center.y = center_y

            detection2d.bbox.size_x = float(x2 - x1)
            detection2d.bbox.size_y = float(y2 - y1)

            obj_pose = ObjectHypothesisWithPose()
            obj_pose.id = name
            obj_pose.score = float(scores[index])

            # px2xy: convert the bbox center from pixel coordinates using the camera intrinsics
            world_x, world_y = px2xy(
                [center_x, center_y], self.camera_info["k"], self.camera_info["d"], 1)
            obj_pose.pose.pose.position.x = world_x
            obj_pose.pose.pose.position.y = world_y
            # obj_pose.pose.pose.position.z = 1.0  # For a 2D camera the result is normalized;
            # multiply by the depth z yourself to get the correct x/y.
            detection2d.results.append(obj_pose)
            self.result_msg.detections.append(detection2d)

            # draw
            if self.show_result:
                cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(image, name, (x1, y1),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
                cv2.imshow('result', image)
                cv2.waitKey(1)

        # if view or pub
        if self.show_result:
            cv2.imshow('result', image)
            cv2.waitKey(1)

        print("before publish out if")
        if len(categories) > 0:
            self.yolo_result_pub.publish(self.result_msg)


def main():
    rclpy.init()
    rclpy.spin(YoloV5Ros2())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
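Note that the node builds model_path from the package share directory (.../config/<model>.pt), so the yolov5s.pt weights have to end up in the package's config directory. If your checkout does not already contain them, copying them in before building should work; the paths below are only an example:

# hypothetical paths: adjust to where your workspace and the downloaded weights live
cp ~/Downloads/yolov5s.pt ~/ros2_ws/src/yolov5_ros2/config/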

6. Build the workspace

Open a terminal in the workspace root (one level above src) and build:

colcon build

colcon build compiles the code under the src folder.

Source the local install space:

source install/local_setup.bash

If you want this to be available in every terminal, open your home directory, press Ctrl+H to show hidden files, open .bashrc, and insert a line like the one below.
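The line to add is the same source command, pointing at the install space of your workspace (replace ~/ros2_ws with wherever your workspace actually lives):

# appended to ~/.bashrc
source ~/ros2_ws/install/local_setup.bash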

7. Run the simulation

ros2 launch learning_gazebo load_mbot_camera_into_gazebo.launch.py

Open another terminal and run:

ros2 run yolov5_ros2 yolo_detect_2d --ros-args -p device:=cpu -p image_topic:=/camera/image_raw

Note: the value after image_topic:= is the name of the topic your camera publishes on. Change it to match your camera's topic and object detection will run.
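If you are not sure what topic your camera publishes on, the standard ros2 CLI can list and inspect the active topics from another terminal:

ros2 topic list                     # all currently advertised topics
ros2 topic info /camera/image_raw   # type and publisher count of the image topic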

You can drive the car by opening another terminal and running:

ros2 run teleop_twist_keyboard teleop_twist_keyboard

In that terminal, press i to move forward, j to rotate left, l to rotate right, k to stop, and , (comma) to move backward.

This post will be revised and updated from time to time, so consider bookmarking it.
