
Apollo Perception Source Code Reading | Lidar

Original post: https://zhuanlan.zhihu.com/p/389572382

This article is part of my notes on reading the Apollo perception (fusion) source code. I recommend reading it alongside the Apollo 6.0 source. My understanding is limited, so corrections are very welcome!
My annotated code: leox24 / perception_learn

Lidar module

This is going to be a very long article.
It mainly covers the framework code of the lidar detection pipeline.

0 Reference articles

1 Directory structure

lidar
├── app // entry points: the old CNN segmentation, the current PointPillars detection, plus classification and tracking
│   ├── BUILD
│   ├── lidar_app_lidar_pipeline_test.cc
│   ├── lidar_obstacle_detection.cc
│   ├── lidar_obstacle_detection.h
│   ├── lidar_obstacle_segmentation.cc
│   ├── lidar_obstacle_segmentation.h
│   ├── lidar_obstacle_tracking.cc
│   ├── lidar_obstacle_tracking.h
│   └── proto
├── common
├── lib
│   ├── classifier
│   ├── detection
│   ├── dummy             
│   ├── ground_detector
│   ├── interface
│   ├── map_manager
│   ├── object_builder
│   ├── object_filter_bank
│   ├── pointcloud_preprocessor
│   ├── roi_filter
│   ├── scene_manager
│   ├── segmentation
│   └── tracker
├── README.md
└── tools

2 Entry points

  • Launch file: Apollo/modules/perception/production/launch/perception_lidar.launch
  • Corresponding dag file: Apollo/modules/perception/production/dag/dag_streaming_perception_lidar.dag
    The dag contains four component classes: DetectionComponent, RecognitionComponent, FusionComponent, and V2XFusionComponent, i.e. detection, recognition/tracking, fusion, and V2X fusion. For the lidar module itself, what matters is the concrete implementation of the detection and recognition/tracking components; fusion and V2X fusion are downstream processing of the lidar module's output.

So the Apollo 6.0 lidar pipeline is: DetectionComponent (detection) >> RecognitionComponent (recognition/tracking).

Apollo/modules/perception/onboard/component/detection_component.cc
Apollo/modules/perception/onboard/component/recognition_component.cc

Unlike the lidar module in Apollo 5, the SegmentationComponent has been replaced by a DetectionComponent, and a PointPillars model is now used to process the raw lidar point cloud, so the flow differs in a few places.

2.1 The DetectionComponent class
  • DetectionComponent::Init()
    Parameter initialization
bool DetectionComponent::Init() {
  LidarDetectionComponentConfig comp_config;
  if (!GetProtoConfig(&comp_config)) {
    return false;
  }

  /** Config file parameters
   * Apollo/modules/perception/production/conf/perception/lidar/velodyne128_detection_conf.pb.txt:
   * sensor_name: "velodyne128"
   * enable_hdmap: true
   * lidar_query_tf_offset: 0
   * lidar2novatel_tf2_child_frame_id: "velodyne128"
   * output_channel_name: "/perception/inner/DetectionObjects"
  */
  // Initialize the member algorithm classes
  if (!InitAlgorithmPlugin()) {
    AERROR << "Failed to init detection component algorithm plugin.";
    return false;
  }
}
  • DetectionComponent::InternalProc()

    • Input: drivers::PointCloud, the raw point cloud
    • Output: LidarFrameMessage, the lidar processing result

    out_message->lidar_frame_ is passed by reference into the downstream algorithms, so it ends up holding the final detection result. The coordinate transforms here (sensor-to-world and sensor-to-ego), used again and again later, are obtained at runtime through the TransformWrapper class, which manages the sensor extrinsics and the ego vehicle's global pose; look up its implementation if interested.

bool DetectionComponent::InternalProc(
    const std::shared_ptr<const drivers::PointCloud>& in_message,
    const std::shared_ptr<LidarFrameMessage>& out_message) {
  ...
  auto& frame = out_message->lidar_frame_;
  Eigen::Affine3d pose = Eigen::Affine3d::Identity();
  // lidar_query_tf_offset_ = 0
  const double lidar_query_tf_timestamp =
      timestamp - lidar_query_tf_offset_ * 0.001;
  // Get the pose that transforms into the world frame
  if (!lidar2world_trans_.GetSensor2worldTrans(lidar_query_tf_timestamp,
                                               &pose)) {
    return false;
  }

  // Store the sensor-to-world transform at the current timestamp
  frame->lidar2world_pose = pose;

  // Extrinsic transform from the sensor to the ego vehicle (novatel)
  lidar::LidarObstacleDetectionOptions detect_opts;
  detect_opts.sensor_name = sensor_name_;
  lidar2world_trans_.GetExtrinsics(&detect_opts.sensor2novatel_extrinsics);

  // lidar detection
  // Main processing call: LidarObstacleDetection
  lidar::LidarProcessResult ret =
      detector_->Process(detect_opts, in_message, frame.get());
2.2 The RecognitionComponent class
  • RecognitionComponent::InternalProc()
    • Input: LidarFrameMessage
    • Output: SensorFrameMessage, which is sent on to the fusion module
bool RecognitionComponent::InternalProc(
    const std::shared_ptr<const LidarFrameMessage>& in_message,
    const std::shared_ptr<SensorFrameMessage>& out_message) {
  ...
  auto& lidar_frame = in_message->lidar_frame_;

  // Process the detection result and run tracking: LidarObstacleTracking
  lidar::LidarProcessResult ret =
      tracker_->Process(track_options, lidar_frame.get());
      
  // Copy into the output message
  out_message->hdmap_ = lidar_frame->hdmap_struct;
  auto& frame = out_message->frame_;
  frame->sensor_info = lidar_frame->sensor_info;
  frame->timestamp = in_message->timestamp_;
  frame->objects = lidar_frame->tracked_objects;
  ...

Next, the two components are analyzed separately.
The parameter configurations referenced below mostly live under apollo/modules/perception/production/data/perception/lidar/models/; just look for the conf file named in the corresponding Init function.

3 Detection: DetectionComponent

  • The LidarObstacleDetection class: detection entry point
  • LidarObstacleDetection::Init()
    Parameter initialization
bool LidarObstacleDetection::Init(
    const LidarObstacleDetectionInitOptions& options)
  /*
  detector: "PointPillarsDetection"
  use_map_manager: true
  use_object_filter_bank: true
  */

  // Initialize the point cloud preprocessor
  PointCloudPreprocessorInitOptions preprocessor_init_options;
  preprocessor_init_options.sensor_name = sensor_name;
  ACHECK(cloud_preprocessor_.Init(preprocessor_init_options));

  // Initialize the PointPillars detector
  detector_.reset(new PointPillarsDetection);
  CHECK_NOTNULL(detector_.get());
  DetectionInitOptions detection_init_options;
  ACHECK(detector_->Init(detection_init_options));
  • LidarObstacleDetection::Process()
    The processing function; the flow is: preprocessing -> model inference
LidarProcessResult LidarObstacleDetection::Process(
    const LidarObstacleDetectionOptions& options,
    const std::shared_ptr<apollo::drivers::PointCloud const>& message,
    LidarFrame* frame) {
  const auto& sensor_name = options.sensor_name;
  PointCloudPreprocessorOptions preprocessor_options;
  // Extrinsics from the sensor to the GPS/IMU (novatel)
  preprocessor_options.sensor2novatel_extrinsics =
      options.sensor2novatel_extrinsics;
  // Preprocess the raw point cloud
  if (cloud_preprocessor_.Preprocess(preprocessor_options, message, frame)) {
    // Model inference
    return ProcessCommon(options, frame);
  }
  return LidarProcessResult(LidarErrorCode::PointCloudPreprocessorError,
                            "Failed to preprocess point cloud.");
}
3.1 Point cloud preprocessing

cloud_preprocessor_.Preprocess() sounds like it would do sophisticated preprocessing, but it simply filters out points on the ego vehicle body and stores a copy of the cloud transformed into the world frame (a small sketch of that transform follows the code). The config options filter_naninf_points and filter_high_z_points are both false, so in the code below I removed the handling for those two.

  • PointCloudPreprocessor::Preprocess()
bool PointCloudPreprocessor::Preprocess(
    const PointCloudPreprocessorOptions& options,
    const std::shared_ptr<apollo::drivers::PointCloud const>& message,
    LidarFrame* frame) const {
  if (frame == nullptr) {
    return false;
  }
  if (frame->cloud == nullptr) {
    frame->cloud = base::PointFCloudPool::Instance().Get();
  }
  if (frame->world_cloud == nullptr) {
    frame->world_cloud = base::PointDCloudPool::Instance().Get();
  }
  frame->cloud->set_timestamp(message->measurement_time());
  if (message->point_size() > 0) {
    frame->cloud->reserve(message->point_size());
    base::PointF point;
    for (int i = 0; i < message->point_size(); ++i) {
      const apollo::drivers::PointXYZIT& pt = message->point(i);
      Eigen::Vector3d vec3d_lidar(pt.x(), pt.y(), pt.z());
      // Transform into the vehicle (novatel) frame
      Eigen::Vector3d vec3d_novatel =
          options.sensor2novatel_extrinsics * vec3d_lidar;
      // Filter out points inside the ego vehicle box
      if (filter_nearby_box_points_ && vec3d_novatel[0] < box_forward_x_ &&
          vec3d_novatel[0] > box_backward_x_ &&
          vec3d_novatel[1] < box_forward_y_ &&
          vec3d_novatel[1] > box_backward_y_) {
        continue;
      }
      point.x = pt.x();
      point.y = pt.y();
      point.z = pt.z();
      point.intensity = static_cast<float>(pt.intensity());
      frame->cloud->push_back(point, static_cast<double>(pt.timestamp()) * 1e-9,
                              std::numeric_limits<float>::max(), i, 0);
    }
    // Store a copy of the cloud transformed into the world frame
    TransformCloud(frame->cloud, frame->lidar2world_pose, frame->world_cloud);
  }
  return true;
}
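The world-frame copy at the end is just the lidar-to-world pose applied to every point. Below is a minimal self-contained sketch of that idea (illustration only, with hypothetical local structs instead of Apollo's base::PointFCloud/PointDCloud; the real TransformCloud also carries over per-point attributes such as the timestamp):

#include <Eigen/Geometry>
#include <vector>

struct PointF { float x, y, z, intensity; };
struct PointD { double x, y, z, intensity; };

// Apply the lidar-to-world pose to every point of the local cloud.
void TransformCloudSketch(const std::vector<PointF>& local_cloud,
                          const Eigen::Affine3d& lidar2world_pose,
                          std::vector<PointD>* world_cloud) {
  world_cloud->clear();
  world_cloud->reserve(local_cloud.size());
  for (const PointF& pt : local_cloud) {
    // Rotate + translate the point into the world frame.
    Eigen::Vector3d world = lidar2world_pose * Eigen::Vector3d(pt.x, pt.y, pt.z);
    world_cloud->push_back({world.x(), world.y(), world.z(),
                            static_cast<double>(pt.intensity)});
  }
}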
3.2 Model inference

After preprocessing, the cloud goes into the Detect function of the PointPillarsDetection class.

Some FLAGS parameters in this class are generated with the gflags macro FLAGS_##name; the flag values used in production are set in Apollo/modules/perception/production/conf/perception/perception_common.flag.

The macro declarations are generated in Apollo/modules/perception/common/perception_gflags.h, and the flag definitions are in perception_gflags.cc.

Most of these flags are false, which keeps the raw point cloud going into inference and gives more accurate detections. Simplified, the function looks like the code below; the basic flow is: filter out points outside the configured range and flatten the cloud into a 1-D array (a small sketch of this step follows the code) >> run inference to get boxes and types >> save the boxes and class attributes.

  • PointPillarsDetection::Detect()
bool PointPillarsDetection::Detect(const DetectionOptions& options,
                                   LidarFrame* frame) {

  // record input cloud and lidar frame
  original_cloud_ = frame->cloud;
  original_world_cloud_ = frame->world_cloud;
  lidar_frame_ref_ = frame;

  // check output
  frame->segmented_objects.clear();

  int num_points;
  // Pointer to the raw point cloud
  cur_cloud_ptr_ = std::shared_ptr<base::PointFCloud>(
      new base::PointFCloud(*original_cloud_));

  num_points = cur_cloud_ptr_->size();

  // point cloud to array = num_points * 5
  float* points_array = new float[num_points * FLAGS_num_point_feature]();

  // Five features per point (x, y, z, intensity, delta time), flattened into one array.
  // Points outside the configured range are filtered out.
  CloudToArray(cur_cloud_ptr_, points_array, FLAGS_normalizing_factor);  // 255

  // inference
  std::vector<float> out_detections;
  std::vector<int> out_labels;
  point_pillars_ptr_->DoInference(points_array, num_points, &out_detections,
                                  &out_labels);

  // transfer output bounding boxes to objects
  // Compute the box corners (with heading), transform them into the ego and world
  // frames, and save the boxes and class attributes into frame->segmented_objects
  GetObjects(&frame->segmented_objects, frame->lidar2world_pose,
             &out_detections, &out_labels);

  delete[] points_array;
  return true;
}
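For reference, here is a minimal sketch of the flattening step described above (illustration only, not the Apollo CloudToArray implementation; PointF and the parameter names are hypothetical, and the real version additionally drops points outside the configured x/y/z range):

#include <vector>

struct PointF { float x, y, z, intensity; };

// Flatten the cloud into [x, y, z, intensity / normalizing_factor, delta_t] per point,
// matching FLAGS_num_point_feature = 5 in the snippet above.
void CloudToArraySketch(const std::vector<PointF>& cloud,
                        const std::vector<double>& point_timestamps,
                        double frame_timestamp, float normalizing_factor,
                        float* out_points_array) {
  constexpr int kNumPointFeature = 5;
  for (size_t i = 0; i < cloud.size(); ++i) {
    const PointF& pt = cloud[i];
    out_points_array[i * kNumPointFeature + 0] = pt.x;
    out_points_array[i * kNumPointFeature + 1] = pt.y;
    out_points_array[i * kNumPointFeature + 2] = pt.z;
    // Intensity is normalized (normalizing_factor is 255 in the config).
    out_points_array[i * kNumPointFeature + 3] = pt.intensity / normalizing_factor;
    // Per-point time offset relative to the frame timestamp.
    out_points_array[i * kNumPointFeature + 4] =
        static_cast<float>(point_timestamps[i] - frame_timestamp);
  }
}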
  • PointPillars::DoInference()

TODO: this involves CUDA programming, which I am not very familiar with, and there is not much to add about the PointPillars paper itself; I may update this later.

4 Tracking and classification: RecognitionComponent

  • The LidarObstacleTracking class: tracking entry point
  • LidarObstacleTracking::Init()
    Parameter initialization
bool LidarObstacleTracking::Init(
    const LidarObstacleTrackingInitOptions& options);

apollo/modules/perception/production/data/perception/lidar/models/lidar_obstacle_pipeline/velodyne128/lidar_obstacle_detection.conf
multi_target_tracker: "MlfEngine"
fusion_classifier: "FusedClassifier"
  • LidarObstacleTracking::Process()
    The processing function, split into two parts, tracking and classification:
LidarProcessResult LidarObstacleTracking::Process(
    const LidarObstacleTrackingOptions& options, LidarFrame* frame){
  
  MultiTargetTrackerOptions tracker_options;
  // Tracking
  if (!multi_target_tracker_->Track(tracker_options, frame)) {
    return LidarProcessResult(LidarErrorCode::TrackerError,
                              "Fail to track objects.");
  }

  ClassifierOptions fusion_classifier_options;
  // Classification
  if (!fusion_classifier_->Classify(fusion_classifier_options, frame)) {
    return LidarProcessResult(LidarErrorCode::ClassifierError,
                              "Fail to fuse object types.");
  }

}
4.1 Tracking
  • The MlfEngine class: the tracking implementation
  • MlfEngine::Init()
    Initializes the parameter config, the matcher, the tracker and the evaluator. The evaluator is not actually used later (its call is commented out). The overall flow is similar to the tracking in the fusion module, so I omit comments in many places, but the details still differ.
bool MlfEngine::Init(const MultiTargetTrackerInitOptions& options)
  // Config parameters
  main_sensor: "velodyne128"
  use_histogram_for_match: true
  histogram_bin_size: 10
  output_predict_objects: false
  reserved_invisible_time: 0.3
  use_frame_timestamp: true

  // Matcher
  matcher_.reset(new MlfTrackObjectMatcher);
  MlfTrackObjectMatcherInitOptions matcher_init_options;
  ACHECK(matcher_->Init(matcher_init_options));

  // Tracker
  tracker_.reset(new MlfTracker);
  MlfTrackerInitOptions tracker_init_options;
  ACHECK(tracker_->Init(tracker_init_options));
  
  // Evaluator
  evaluator_.Init();

  • MlfEngine::Track()
    The core function!!! The source code has detailed step-by-step comments for this function, seven steps in all.

  • (0) Adjust the timestamps of the segmented objects

  // 0. modify objects timestamp if necessary
  // Assign the frame timestamp to every object
  if (use_frame_timestamp_) {
    for (auto& object : frame->segmented_objects) {
      object->latest_tracked_time = frame->timestamp;
    }
  }
  • (1) Translate lidar2world to get the sensor2local transform, i.e. a local world frame (see the small Eigen example after this snippet)
  // 1. add global offset to pose (only when no track exists)
  // When there are no tracks yet (i.e. at initialization), keep the translation vector T at
  // that moment; sensor2local is then lidar2world shifted by -T, which amounts to placing the
  // origin of the local frame where the vehicle was at initialization.
  if (foreground_track_data_.empty() && background_track_data_.empty()) {
    // -T vector, 3x1
    global_to_local_offset_ = -frame->lidar2world_pose.translation();
  }
  sensor_to_local_pose_ = frame->lidar2world_pose;
  // Apply the offset. At the first frame this cancels T, leaving only the rotation R.
  sensor_to_local_pose_.pretranslate(global_to_local_offset_);
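A tiny self-contained Eigen example of what pretranslate does here (illustration only, with made-up values): it prepends the offset translation, so the resulting transform maps a sensor point p to R*p + (T + offset); with offset = -T0 captured at initialization, the translation of the very first frame cancels out.

#include <Eigen/Geometry>
#include <iostream>

int main() {
  // A made-up lidar2world pose: some yaw rotation plus a large UTM-like translation T.
  Eigen::Affine3d lidar2world = Eigen::Affine3d::Identity();
  lidar2world.rotate(Eigen::AngleAxisd(0.5, Eigen::Vector3d::UnitZ()));
  lidar2world.translation() << 440000.0, 4400000.0, 30.0;

  // Offset captured at initialization: -T.
  const Eigen::Vector3d global_to_local_offset = -lidar2world.translation();

  Eigen::Affine3d sensor_to_local = lidar2world;
  sensor_to_local.pretranslate(global_to_local_offset);

  // At the first frame the translation cancels out; only the rotation R remains.
  std::cout << sensor_to_local.translation().transpose() << std::endl;  // ~ 0 0 0
  return 0;
}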
  • (2) Split foreground and background, and convert them to tracked objects
    The objects are transformed into the local frame and split into the foreground and background object lists, and a histogram feature of the point distribution is computed, later used for the association cost. Looking at the computation, I read this histogram as the distribution of the point coordinates x/y/z within each clustered object, 30 values in total over the three axes (see the small distance example after the code).
  // 2. split fg and bg objects, and transform to tracked objects
  // Transform into the local frame and split into foreground/background objects.
  SplitAndTransformToTrackedObjects(frame->segmented_objects,
                                  frame->sensor_info);
  |
  ^                                  
void MlfEngine::SplitAndTransformToTrackedObjects(
    const std::vector<base::ObjectPtr>& objects,
    const base::SensorInfo& sensor_info) {
  std::vector<TrackedObjectPtr> tracked_objects;
  TrackedObjectPool::Instance().BatchGet(objects.size(), &tracked_objects);
  foreground_objects_.clear();
  background_objects_.clear();
  for (size_t i = 0; i < objects.size(); ++i) {
    // Transform into the local frame and fill in the TrackedObject format
    tracked_objects[i]->AttachObject(objects[i], sensor_to_local_pose_,
                                     global_to_local_offset_, sensor_info);
    if (!objects[i]->lidar_supplement.is_background &&
        use_histogram_for_match_) {
      tracked_objects[i]->histogram_bin_size = histogram_bin_size_;//10
      // Compute the histogram feature of the foreground point cloud, used later for the association cost
      tracked_objects[i]->ComputeShapeFeatures();
    }
    // Split into foreground and background objects.
    // Everything output by PointPillars is treated as foreground, i.e. is_background == false
    if (objects[i]->lidar_supplement.is_background) {
      background_objects_.push_back(tracked_objects[i]);
    } else {
      foreground_objects_.push_back(tracked_objects[i]);
    }
  }
}

tracked_objects[i]->ComputeShapeFeatures();
                      |
                      ^
void ComputeHistogram(int bin_size, float* feature) {
  // Compute the min/max corner points of the box, and its center point
  GetMinMaxCenter();
  // 10
  int xstep = bin_size;
  int ystep = bin_size;
  int zstep = bin_size;
  // 30
  int stat_len = xstep + ystep + zstep;
  std::vector<int> stat_feat(stat_len, 0);
  float xsize =
      (max_pt_.x - min_pt_.x) / static_cast<float>(xstep) + 0.000001f;
  float ysize =
      (max_pt_.y - min_pt_.y) / static_cast<float>(ystep) + 0.000001f;
  float zsize =
      (max_pt_.z - min_pt_.z) / static_cast<float>(zstep) + 0.000001f;

  int pt_num = static_cast<int>(cloud_->size());
  for (int i = 0; i < pt_num; ++i) {
    base::PointF& pt = cloud_->at(i);
    // Bins 0-9 for x, 10-19 for y, 20-29 for z: the (max - min) range is split into 10 bins
    // and the bin containing each point is incremented, much like a Photoshop histogram;
    // it shows how many points fall at each distance from the minimum along x/y/z.
    ++stat_feat[static_cast<int>((pt.x - min_pt_.x) / xsize)];
    ++stat_feat[xstep + static_cast<int>((pt.y - min_pt_.y) / ysize)];
    ++stat_feat[xstep + ystep + static_cast<int>((pt.z - min_pt_.z) / zsize)];
  }

  // The first 7 features (they do not seem to be used later)
  feature[0] = center_pt_.x / 10.0f;
  feature[1] = center_pt_.y / 10.0f;
  feature[2] = center_pt_.z;
  feature[3] = xsize;
  feature[4] = ysize;
  feature[5] = zsize;
  feature[6] = static_cast<float>(pt_num);
  // The remaining 30 features
  for (size_t i = 0; i < stat_feat.size(); ++i) {
    feature[i + 7] =
        static_cast<float>(stat_feat[i]) / static_cast<float>(pt_num);
  }
}
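To make the role of those 30 values concrete, the association stage can compare two such histograms with a simple per-bin distance. The sketch below is illustrative only (the exact weighting lives in the distance functions used by MlfTrackObjectDistance, not here):

#include <cmath>
#include <vector>

// L1 distance between two shape-feature histograms of equal length; in the tracker the
// shape features are stored on the TrackedObject and feed the histogram term of the cost.
float HistogramDistanceSketch(const std::vector<float>& a,
                              const std::vector<float>& b) {
  float dist = 0.0f;
  for (size_t i = 0; i < a.size() && i < b.size(); ++i) {
    dist += std::fabs(a[i] - b[i]);
  }
  return dist;
}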
  • (3) Association and matching.
    An association matrix is computed: seven features score each current observation against the one-step prediction of each track. Track prediction uses a CV (constant velocity) model, simply velocity times elapsed time. The association terms are mainly: distance between the anchor point and the track, the angle between the anchor-point xy displacement across two frames and the track's velocity direction, box size difference, point count, histogram distance, centroid shift, and box IoU. Note that the anchor point is the centroid of each object's point cloud, expressed in the local frame (the local world frame); this confused me for a long time in the fusion module.
    For the matching itself, the foreground uses Hungarian matching and the background uses GNN (global nearest neighbor) matching. Hungarian matching was covered in the fusion article and the same interface is reused here; the GNN algorithm is simple, so I will not elaborate; if interested, see my annotated code or related articles (a small sketch is also given after the code below).
    Once the match result is obtained, the matched observations are pushed into the track's observation history, waiting for the main sensor to trigger an update; unmatched observations create new tracks.
    Correction: the background association is really a modified global nearest neighbor: no separate cost step, the pair with the globally smallest association value is matched one-to-one first, then the next smallest, and so on. For nearest-neighbor association see: nearest-neighbor target association algorithms.
  // 3. assign tracked objects to tracks
  // Association: Hungarian matching for foreground, GNN for background
  MlfTrackObjectMatcherOptions match_options;
  TrackObjectMatchAndAssign(match_options, foreground_objects_, "foreground",
                            &foreground_track_data_);
  TrackObjectMatchAndAssign(match_options, background_objects_, "background",
                            &background_track_data_);
  |
  ^
void MlfEngine::TrackObjectMatchAndAssign(
    const MlfTrackObjectMatcherOptions& match_options,
    const std::vector<TrackedObjectPtr>& objects, const std::string& name,
    std::vector<MlfTrackDataPtr>* tracks) {
  std::vector<std::pair<size_t, size_t>> assignments;
  std::vector<size_t> unassigned_tracks;
  std::vector<size_t> unassigned_objects;
  // Hungarian for foreground, GNN for background
  matcher_->Match(match_options, objects, *tracks, &assignments,
                  &unassigned_tracks, &unassigned_objects);
  AINFO << "MlfEngine: " + name + " assignments " << assignments.size()
        << " unassigned_tracks " << unassigned_tracks.size()
        << " unassigned_objects " << unassigned_objects.size();
  // 1. for assignment, push object to cache of track_data
  for (auto& pair : assignments) {
    const size_t track_id = pair.first;
    const size_t object_id = pair.second;
    tracks->at(track_id)->PushTrackedObjectToCache(objects[object_id]);
  }
  // 2. for unassigned_objects, create new tracks
  for (auto& id : unassigned_objects) {
    MlfTrackDataPtr track_data = MlfTrackDataPool::Instance().Get();
    tracker_->InitializeTrack(track_data, objects[id]);
    tracks->push_back(track_data);
  }
}
|
^
void MlfTrackObjectMatcher::Match(
    const MlfTrackObjectMatcherOptions &options,
    const std::vector<TrackedObjectPtr> &objects,
    const std::vector<MlfTrackDataPtr> &tracks,
    std::vector<std::pair<size_t, size_t>> *assignments,
    std::vector<size_t> *unassigned_tracks,
    std::vector<size_t> *unassigned_objects) {
...
  // bound_value: 100
  // max_match_distance: 4.0
  BipartiteGraphMatcherOptions matcher_options;
  matcher_options.cost_thresh = max_match_distance_;
  matcher_options.bound_value = bound_value_;

  // Foreground and background objects use different matchers
  // foreground_mathcer_method: "MultiHmBipartiteGraphMatcher"
  // background_matcher_method: "GnnBipartiteGraphMatcher"
  BaseBipartiteGraphMatcher *matcher =
      objects[0]->is_background ? background_matcher_ : foreground_matcher_;

  common::SecureMat<float> *association_mat = matcher->cost_matrix();

  association_mat->Resize(tracks.size(), objects.size());
  // Compute the association matrix using the seven feature distances
  ComputeAssociateMatrix(tracks, objects, association_mat);
  // Hungarian matching for foreground, nearest-neighbor (GNN) matching for background
  matcher->Match(matcher_options, assignments, unassigned_tracks,
                 unassigned_objects);
  // Convert the association value into a score in [0, 1]
  for (size_t i = 0; i < assignments->size(); ++i) {
    objects[assignments->at(i).second]->association_score =
        (*association_mat)(assignments->at(i).first,
                           assignments->at(i).second) /
        max_match_distance_;
  }
}
|
^
The association-cost function:
float MlfTrackObjectDistance::ComputeDistance()
// The anchor point is the centroid of each object's point cloud, in the local frame (local world frame).
// Cost terms: anchor-to-track distance, angle between the anchor xy displacement over two frames and
// the track velocity direction, box size difference, point count, histogram distance, centroid shift, box IoU
// Initial weights: 0.6f, 0.2f, 0.1f, 0.1f, 0.5f, 0.f, 0.f, 0.6f
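As a reading aid, here is a minimal sketch of the greedy global-nearest-neighbor idea described in the correction above (illustration only, not the GnnBipartiteGraphMatcher source): pick the globally smallest entry of the cost matrix that is below the threshold, fix that track/object pair, and repeat on the remaining rows and columns.

#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

// Greedy global nearest neighbor over a dense cost matrix (rows = tracks, cols = objects).
// Pairs whose cost reaches or exceeds cost_thresh are never matched.
void GnnMatchSketch(const std::vector<std::vector<float>>& cost,
                    float cost_thresh,
                    std::vector<std::pair<size_t, size_t>>* assignments) {
  const size_t rows = cost.size();
  const size_t cols = rows > 0 ? cost[0].size() : 0;
  std::vector<bool> row_used(rows, false), col_used(cols, false);
  while (true) {
    float best = cost_thresh;
    size_t best_r = rows, best_c = cols;
    // Find the globally smallest remaining cost below the threshold.
    for (size_t r = 0; r < rows; ++r) {
      if (row_used[r]) continue;
      for (size_t c = 0; c < cols; ++c) {
        if (col_used[c]) continue;
        if (cost[r][c] < best) {
          best = cost[r][c];
          best_r = r;
          best_c = c;
        }
      }
    }
    if (best_r == rows) break;  // no remaining pair below the threshold
    row_used[best_r] = true;
    col_used[best_c] = true;
    assignments->emplace_back(best_r, best_c);
  }
}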
  • (4) Kalman filter: update tracks with the measurements
    Similar to the fusion module, but only motion and shape are updated. The motion-update code follows roughly the same flow as in fusion, a simple CV model; I did not study it closely, so I will not elaborate here.
    If interested, see my annotated code in lidar/lib/tracker/multi_lidar_fusion/mlf_motion_filter.cc (project link at the top of the article). I did not bother reading the shape update...
    (Maybe I misread it: in the lidar Kalman update equations the measurement Z looks like the velocity terms vx+vy, yet the H matrix seems to map the position terms x+y???) Correction: judging from the breakdown function, the state only contains v and a, so the H matrix is fine. (A minimal sketch of such an update follows the code below.)
  // 4. state filter in tracker if is main sensor
  // If this is the main sensor, update the track states with the Kalman filter
  bool is_main_sensor =
      (main_sensor_.find(frame->sensor_info.name) != main_sensor_.end());
  if (is_main_sensor) {
    TrackStateFilter(foreground_track_data_, frame->timestamp);
    TrackStateFilter(background_track_data_, frame->timestamp);
  }

|
^
void MlfEngine::TrackStateFilter(const std::vector<MlfTrackDataPtr>& tracks,
                                 double frame_timestamp) {
  std::vector<TrackedObjectPtr> objects;
  for (auto& track_data : tracks) {

    // Observations cached in the time interval [latest_visible_time_, latest_cached_time_]
    track_data->GetAndCleanCachedObjectsInTimeInterval(&objects);
    
    // Update motion and shape, similar to the fusion module
    for (auto& obj : objects) {
      tracker_->UpdateTrackDataWithObject(track_data, obj);
    }
    if (objects.empty()) {
      tracker_->UpdateTrackDataWithoutObject(frame_timestamp, track_data);
    }
  }
}
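Regarding the note above that the state only contains velocity and acceleration: below is a minimal sketch of a Kalman update with state x = [vx, vy, ax, ay] and a velocity measurement (e.g. the anchor-point displacement divided by dt). It only illustrates the idea under that assumption; it is not the MlfMotionFilter code, and the noise values are made up.

#include <Eigen/Dense>

// State: [vx, vy, ax, ay]; measurement: observed velocity [vx, vy].
// Constant-acceleration predict followed by a velocity-measurement update.
void KalmanVelocityUpdateSketch(Eigen::Vector4d* x, Eigen::Matrix4d* P,
                                const Eigen::Vector2d& measured_velocity,
                                double dt) {
  // Predict: v' = v + a * dt, a' = a
  Eigen::Matrix4d F = Eigen::Matrix4d::Identity();
  F(0, 2) = dt;
  F(1, 3) = dt;
  Eigen::Matrix4d Q = Eigen::Matrix4d::Identity() * 0.1;  // process noise (made up)
  *x = F * (*x);
  *P = F * (*P) * F.transpose() + Q;

  // Update: H picks the velocity part of the state, so Z and H are consistent.
  Eigen::Matrix<double, 2, 4> H = Eigen::Matrix<double, 2, 4>::Zero();
  H(0, 0) = 1.0;
  H(1, 1) = 1.0;
  Eigen::Matrix2d R = Eigen::Matrix2d::Identity() * 0.4;  // measurement noise (made up)
  Eigen::Vector2d y = measured_velocity - H * (*x);
  Eigen::Matrix2d S = H * (*P) * H.transpose() + R;
  Eigen::Matrix<double, 4, 2> K = (*P) * H.transpose() * S.inverse();
  *x += K * y;
  *P = (Eigen::Matrix4d::Identity() - K * H) * (*P);
}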
  • (5) Collect the tracking results, presumably for publishing downstream
  • (6) Remove tracks that have gone without an update for longer than their lifetime; Apollo keeps a track alive for 0.3 s, so a track not updated for more than 0.3 s is deleted (a small sketch of this check follows the code)
 // 5. track to object if is main sensor
  // The current tracking results, to be published downstream?
  frame->tracked_objects.clear();
  if (is_main_sensor) {
    CollectTrackedResult(frame);
  }
  // 6. remove stale data
  // Remove stale tracks that have exceeded their lifetime without an update
  RemoveStaleTrackData("foreground", frame->timestamp, &foreground_track_data_);
  RemoveStaleTrackData("background", frame->timestamp, &background_track_data_);
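A minimal sketch of the staleness check implied by reserved_invisible_time: 0.3 in the config (illustration only, with a hypothetical TrackDataSketch struct; the real RemoveStaleTrackData also recycles the removed track data back into the pool):

#include <memory>
#include <vector>

struct TrackDataSketch {
  double latest_visible_time = 0.0;
};

// Keep only tracks whose last visible time is within reserved_invisible_time
// (0.3 s in the config) of the current frame timestamp.
void RemoveStaleTrackDataSketch(double frame_timestamp, double reserved_invisible_time,
                                std::vector<std::shared_ptr<TrackDataSketch>>* tracks) {
  std::vector<std::shared_ptr<TrackDataSketch>> kept;
  for (const auto& track : *tracks) {
    if (track->latest_visible_time + reserved_invisible_time >= frame_timestamp) {
      kept.push_back(track);
    }
  }
  tracks->swap(kept);
}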

4.2 Classification
  • The FusedClassifier class: the classification implementation
  • The Init function
    Parameter configuration and initialization of two algorithm classes; you can see that lidar classification comes down to the concrete implementations of CCRFOneShotTypeFusion and CCRFSequenceTypeFusion.
  // Config parameters
  one_shot_fusion_method: "CCRFOneShotTypeFusion"
  sequence_fusion_method: "CCRFSequenceTypeFusion"
  enable_temporal_fusion: true
  temporal_window: 20.0
  use_tracked_objects: true

  // One-shot type fusion
  one_shot_fuser_ = BaseOneShotTypeFusionRegisterer::GetInstanceByName(
      one_shot_fusion_method_);
  ACHECK(one_shot_fuser_->Init(init_option_));

  // Sequence type fusion
  sequence_fuser_ = BaseSequenceTypeFusionRegisterer::GetInstanceByName(
      sequence_fusion_method_);
  ACHECK(sequence_fuser_->Init(init_option_));  
  • The Classify function

TODO: the sequence fusion uses the Viterbi algorithm; I can follow the code, but I have not yet figured out what data it actually operates on or what exactly it is computing, so I may update this later (a generic Viterbi sketch follows for reference).
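For reference, here is a minimal generic Viterbi sketch (not the CCRF sequence-fusion source): given per-frame class log-probabilities and a class-to-class transition matrix, it finds the most likely class sequence over a track's history; in the CCRF classifier the inputs would correspond to the per-frame type scores and a learned transition/smoothing matrix.

#include <vector>

// Generic Viterbi over T frames and K classes.
// emission_log[t][k]: log-probability of class k at frame t.
// trans_log[i][j]: log-probability of moving from class i to class j between frames.
// Returns the most likely class index per frame.
std::vector<int> ViterbiSketch(const std::vector<std::vector<double>>& emission_log,
                               const std::vector<std::vector<double>>& trans_log) {
  const size_t T = emission_log.size();
  if (T == 0) return {};
  const size_t K = emission_log[0].size();
  std::vector<std::vector<double>> score(T, std::vector<double>(K));
  std::vector<std::vector<int>> backptr(T, std::vector<int>(K, 0));
  score[0] = emission_log[0];
  for (size_t t = 1; t < T; ++t) {
    for (size_t j = 0; j < K; ++j) {
      // Best previous class leading into class j.
      double best = score[t - 1][0] + trans_log[0][j];
      int best_i = 0;
      for (size_t i = 1; i < K; ++i) {
        double s = score[t - 1][i] + trans_log[i][j];
        if (s > best) { best = s; best_i = static_cast<int>(i); }
      }
      score[t][j] = best + emission_log[t][j];
      backptr[t][j] = best_i;
    }
  }
  // Backtrack from the best final class.
  std::vector<int> path(T, 0);
  for (size_t j = 1; j < K; ++j) {
    if (score[T - 1][j] > score[T - 1][path[T - 1]]) path[T - 1] = static_cast<int>(j);
  }
  for (size_t t = T - 1; t > 0; --t) path[t - 1] = backptr[t][path[t]];
  return path;
}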
