【ECCV2022】|4D-StOP: Panoptic Segmentation of 4D LiDAR Using Spatio-Temporal Object Proposal Generation and Aggregation|Paper link|Code link
【ECCV2022】|New SOTA for panoptic part segmentation|Panoptic-PartFormer: Learning a Unified Model for Panoptic Part Segmentation|Paper link|Code link
【ECCV2022】|New SOTA for panoptic segmentation|k-means Mask Transformer|Paper link|Code link
【ECCV2022】|RankSeg: Adaptive Pixel Classification with Image Category Ranking for Segmentation|Paper link|Code link
【ICCV2023】|Open-vocabulary Panoptic Segmentation with Embedding Modulation|Paper link
【CVPR2022】|Joint Forecasting of Panoptic Segmentations with Difference Attention|Paper link
【CVPR2022】|Panoptic, Instance and Semantic Relations: A Relational Context Encoder to Enhance Panoptic Segmentation|Paper link
【CVPR2022】|Detection & segmentation – self-supervised|Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data|Paper link|Code link
【CVPR2022】|LiDAR panoptic segmentation|Proposal-free Lidar Panoptic Segmentation with Pillar-level Affinity|Paper link
【CVPR2023】|Mask-piloted training for Mask2Former-style image segmentation|MP-Former: Mask-Piloted Transformer for Image Segmentation|Paper link|Code link
【ECCV2022】|Occlusion prediction|Joint Prediction of Amodal and Visible Semantic Segmentation for Automated Driving|
【ECCV2022】|Unsupervised domain adaptation|Prototypical Contrast Adaptation for Domain Adaptive Semantic Segmentation|Paper link|Code link
【ECCV2022】|2D-priors-assisted LiDAR semantic segmentation|2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds|Paper link|Code link
【ECCV2022】|Anomaly segmentation in urban driving scenes|Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes|Paper link|Code link
【ECCV2022】|Point cloud semantic segmentation|Open-world Semantic Segmentation for LIDAR Point Clouds|Paper link|Code link
【ICCV2023】|Rethinking Range View Representation for LiDAR Segmentation|Paper link
【ICCV2023】|Multi-modal LiDAR segmentation|UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase|Paper link|Code link
【ICCV2023】|FreeCOS: Self-Supervised Learning from Fractals and Unlabeled Images for Curvilinear Object Segmentation|Paper link|Code link
【ICCV2023】|Segment Anything|Paper link|Code link
【ICCV2023】|MARS: Model-agnostic Biased Object Removal without Additional Supervision for Weakly-Supervised Semantic Segmentation|Paper link|Code link
【CVPR2022】|Nighttime segmentation with hardness detection|NightLab: A Dual-level Architecture with Hardness Detection for Segmentation at Night|Paper link|Code link
【CVPR2022】|Pin the Memory: Learning to Generalize Semantic Segmentation|Paper link
【CVPR2022】|Distortion-aware transformers for panoramic semantic segmentation|Bending Reality: Distortion-aware Transformers for Adapting to Panoramic Semantic Segmentation|Paper link|Code link
【CVPR2022】|Performance Prediction for Semantic Segmentation by a Self-Supervised Image Reconstruction Decoder|Paper link
【CVPR2023】|Arbitrary-modal semantic segmentation|Delivering Arbitrary-Modal Semantic Segmentation|Paper link
【CVPR2023】|Token Contrast for Weakly-Supervised Semantic Segmentation|Paper link|Code link
【CVPR2023】|Foundation Model Drives Weakly Incremental Learning for Semantic Segmentation|Paper link
【CVPR2023】|MSeg3D: Multi-modal 3D Semantic Segmentation for Autonomous Driving|Paper link|Code link
【CVPR2023】|Efficient Semantic Segmentation by Altering Resolutions for Compressed Videos|Paper link|Code link
【CVPR2023】|Semi-supervised LiDAR semantic segmentation|LaserMix for Semi-Supervised LiDAR Semantic Segmentation|Paper link|Code link
【CVPR2023】|Generative Semantic Segmentation|Paper link|Code link
【CVPR2023】|Conflict-Based Cross-View Consistency for Semi-Supervised Semantic Segmentation|Paper link|Code link
【CVPR2023】|Exploiting the Complementarity of 2D and 3D Networks to Address Domain-Shift in 3D Semantic Segmentation|Paper link|Code link
【CVPR2023】|Domain adaptive semantic segmentation|DiGA: Distil to Generalize and then Adapt for Domain Adaptive Semantic Segmentation|Paper link|Code link
【CVPR2023】|3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds|Paper link|Code link
【ECCV2022】|One-stage camouflaged instance segmentation with transformers|OSFormer: One-Stage Camouflaged Instance Segmentation with Transformers|Paper link|Code link|Chinese paper link
【ECCV2022】|Box-supervised instance segmentation via level-set evolution|Box-supervised Instance Segmentation with Level Set Evolution|Paper link|Code link
【ICCV2023】|DVIS: Decoupled Video Instance Segmentation Framework|Paper link|Code link
【CVPR2022】|E2EC: An End-to-End Contour-based Method for High-Quality High-Speed Instance Segmentation|Paper link|Code link
【CVPR2023】|ISBNet: a 3D Point Cloud Instance Segmentation Network with Instance-aware Sampling and Box-aware Dynamic Convolution|Paper link|Code link
【CVPR2023】|FastInst: A Simple Query-Based Model for Real-Time Instance Segmentation|Paper link|Code link
【CVPR2023】|DynaMask: Dynamic Mask Selection for Instance Segmentation|Paper link|Code link
【ECCV2022】|Domain-adaptive video segmentation|Domain Adaptive Video Segmentation via Temporal Pseudo Supervision|Paper link|Code link
【ECCV2022】|XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model|Paper link|Code link
【ECCV2022】|Learning Quality-aware Dynamic Memory for Video Object Segmentation|Paper link|Code link
【CVPR2023】|Object-centric video segmentation|InstMove: Instance Motion for Object-centric Video Segmentation|Paper link
【CVPR2023】|Semi-supervised video object segmentation|MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation|Paper link
【ECCV2022】|Fully convolutional anchor-free 3D object detection|FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection|Paper link|Code link
【ECCV2022】|Position embedding transformation for multi-view 3D detection|PETR: Position Embedding Transformation for Multi-View 3D Object Detection|Paper link|Code link
【ECCV2022】|Dynamic multi-modal 3D detection via deformable feature aggregation|AutoAlignV2: Deformable Feature Aggregation for Dynamic Multi-Modal 3D Object Detection|Paper link|Code link
【ECCV2022】|Monocular 3D object detector|DEVIANT: Depth EquiVarIAnt NeTwork for Monocular 3D Object Detection|Paper link|Code link
【ECCV2022】|Homogeneous Multi-modal Feature Fusion and Interaction for 3D Object Detection|Paper link|Code link
【ECCV2022】|Densely Constrained Depth Estimator for Monocular 3D Object Detection|Paper link|Code link
【ECCV2022】|3D object detection|Plausibility Verification for 3D Object Detectors Using Energy-Based Optimization|Paper link
【ECCV2022】|3D multi-object tracking|SimpleTrack: Understanding and Rethinking 3D Multi-object Tracking|Paper link
【ECCV2022】|IoU-based optimization for single-stage 3D detection|Rethinking IoU-based Optimization for Single-stage 3D Object Detection|arxiv.org/pdf/2207.09332.pdf|Code link
【ICCV2023】|SparseFusion: Fusing Multi-Modal Sparse Representations for Multi-Sensor 3D Object Detection|Paper link|Paper notes
【ICCV2023】|Ada3D: Exploiting the Spatial Redundancy with Adaptive Inference for Efficient 3D Object Detection|Paper link
【ICCV2023】|PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images|Paper link
【ICCV2023】|Cross Modal Transformer: Towards Fast and Robust 3D Object Detection|Paper link
【ICCV2023】|DQS3D: Densely-matched Quantization-aware Semi-supervised 3D Detection|Paper link
【ICCV2023】|StreamPETR: Exploring Object-Centric Temporal Modeling for Efficient Multi-View 3D Object Detection|Paper link|Code link
【CVPR2022】|3D object detection – point cloud|Focal Sparse Convolutional Networks for 3D Object Detection|Paper link|Code link
【CVPR2022】|3D object detection – point cloud|OccAM's Laser: Occlusion-based Attribution Maps for 3D Object Detectors on LiDAR Data|Paper link|Code link
【CVPR2022】|3D object detection – image|AutoRF: Learning 3D Object Radiance Fields from Single View Observations|Paper link|Code link
【CVPR2022】|Object detection under noisy annotations|Towards Robust Adaptive Object Detection under Noisy Annotations|Paper link
【CVPR2022】|Object detection – radar|Exploiting Temporal Relations on Radar Perception for Autonomous Driving|Paper link
【CVPR2022】|3D object detection – image|Homography Loss for Monocular 3D Object Detection|Paper link
【CVPR2022】|3D object detection – image + point cloud|CAT-Det: Contrastively Augmented Transformer for Multi-modal 3D Object Detection|Paper link
【CVPR2022】|3D object detection – image|MonoDETR: Depth-aware Transformer for Monocular 3D Object Detection|Paper link|Code link
【CVPR2022】|3D object detection – image + point cloud|Sparse Fuse Dense: Towards High Quality 3D Detection with Depth Completion|Paper link
【CVPR2022】|3D object detection – image|MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer|Paper link|Code link
【CVPR2022】|3D object detection – image|MonoJSG: Joint Semantic and Geometric Cost Volume for Monocular 3D Object Detection|Paper link|Code link (GitHub: lianqing11/MonoJSG)
【CVPR2022】|Object detection – point cloud|Point Density-Aware Voxels for LiDAR 3D Object Detection|Paper link|Code link
【CVPR2022】|Traffic sign recognition – physical adversarial attack|Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon|Paper link
【CVPR2022】|Point cloud understanding|A Unified Query-based Paradigm for Point Cloud Understanding|Paper link
【CVPR2022】|Object detection – point cloud|A Versatile Multi-View Framework for LiDAR-based 3D Object Detection with Guidance from Panoptic Segmentation|Paper link
【CVPR2022】|3D object detection – image|Pseudo-Stereo for Monocular 3D Object Detection in Autonomous Driving|Paper link|Code link
【CVPR2022】|Object detection – point cloud|Embracing Single Stride 3D Object Detector with Sparse Transformer|Paper link|Code link
【CVPR2022】|Object detection – semi-supervised|PseudoProp: Robust Pseudo-Label Generation for Semi-Supervised Object Detection in Autonomous Driving Systems|Paper link
【CVPR2023】|Towards Domain Generalization for Multi-view 3D Object Detection in Bird-Eye-View|Paper link
【CVPR2023】|LiDAR–camera fusion for multi-modal 3D detection|MSMDFusion: Fusing LiDAR and Camera at Multiple Scales with Multi-Depth Seeds for 3D Object Detection|Paper link
【CVPR2023】|Weakly supervised 3D object detection|Weakly Supervised Monocular 3D Object Detection using Multi-View Projection and Direction Consistency|Paper link
【CVPR2023】|First multi-dataset 3D object detection framework|Uni3D: A Unified Baseline for Multi-dataset 3D Object Detection|Paper link|Code link
【CVPR2023】|Virtual Sparse Convolution for Multimodal 3D Object Detection|Paper link|Code link
【CVPR2023】|X3KD: Knowledge Distillation Across Modalities, Tasks and Stages for Multi-Camera 3D Object Detection|Paper link|Demo video
【CVPR2023】|3D Video Object Detection with Learnable Object-Centric Global Optimization|Paper link|Code link
【CVPR2023】|CAPE: Camera View Position Embedding for Multi-View 3D Object Detection|Paper link
【CVPR2023】|Bi3D: Bi-domain Active Learning for Cross-domain 3D Object Detection|Paper link|Code link
【CVPR2023】|AeDet: Azimuth-invariant Multi-view 3D Object Detection|Paper link|Code link
【CVPR2023】|Hierarchical Supervision and Shuffle Data Augmentation for 3D Semi-Supervised Object Detection|Paper link|Code link
【CVPR2023】|LinK: Linear Kernel for LiDAR-based 3D Perception|Paper link|Code link
【CVPR2023】|PiMAE: Point Cloud and Image Interactive Masked Autoencoders for 3D Object Detection|Paper link|Code link
【CVPR2023】|LoGoNet: Towards Accurate 3D Object Detection with Local-to-Global Cross-Modal Fusion|Paper link|Code link
【CVPR2023】|Semi-supervised object detection|MixTeacher: Mining Promising Labels with Mixed Scale Teacher for Semi-Supervised Object Detection|Paper link|Code link
【CVPR2023】|Lite DETR: An Interleaved Multi-Scale Encoder for Efficient DETR|Paper link|Code link
【CVPR2023】|Detecting Everything in the Open World: Towards Universal Object Detection|Paper link
【CVPR2023】|End-to-end object detection|Dense Distinct Query for End-to-End Object Detection|Paper link|Code link
【ECCV2022】|StretchBEV: Stretching Future Instance Prediction Spatially and Temporally|Paper link|Code link
【ECCV2022】|BEV perception|BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers|Paper link|Code link
【ECCV2022】|BEV perception – object detection|Learning Ego 3D Representation as Ray Tracing|Paper link|Code link
【ICCV2023】|SurroundOcc: Multi-Camera 3D Occupancy Prediction for Autonomous Driving|Paper link|Code link
【ICCV2023】|Scene as Occupancy|Paper link|Code link
【ICCV2023】|MetaBEV: Solving Sensor Failures for BEV Detection and Map Segmentation|Paper link|Code link
【CVPR2022】|BEV perception – map + obstacles|Cross-view Transformers for real-time Map-view Semantic Segmentation|Paper link|Code link
【CVPR2022】|BEV object projection|"The Pedestrian next to the Lamppost" Adaptive Object Graphs for Better Instantaneous Mapping|Paper link
【CVPR2022】|BEV perception – lanes + obstacles|Scene Representation in Bird's-Eye View from Surrounding Cameras with Transformers|Paper link
【CVPR2022】|Learning large-scale driving policies from web data in BEV|SelfD: Self-Learning Large-Scale Driving Policies From the Web|Paper link
【CVPR2023】|Robustness analysis of BEV 3D object detection|Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving|Paper link|Code link
【CVPR2023】|Vision-centric joint perception and prediction with a temporal BEV pyramid|TBP-Former: Learning Temporal Bird's-Eye-View Pyramid for Joint Perception and Prediction in Vision-Centric Autonomous Driving|Paper link|Code link
【ECCV2022】|View Vertically: A Hierarchical Network for Trajectory Prediction via Fourier Spectrums|Paper link|Code link
【ECCV2022】|Trajectory forecasting with local behavior data|Aware of the History: Trajectory Forecasting with the Local Behavior Data|Paper link|Code link
【ECCV2022】|D2-TPred: Discontinuous Dependency for Trajectory Prediction under Traffic Lights|Paper link|Code link
【ECCV2022】|AdvDO: Realistic Adversarial Attacks for Trajectory Prediction|Paper link
【ECCV2022】|Vehicle intention prediction – lane change|Lane Change Classification and Prediction with Action Recognition Networks|Paper link|Code link
【ECCV2022】|Behavior decision making|InAction: Interpretable Action Decision Making for Autonomous Driving|Paper link|Code link
【ECCV2022】|Trajectory prediction|Action-based Contrastive Learning for Trajectory Prediction|Paper link
【ECCV2022】|Pedestrian intention prediction|MCIP: Multi-Stream Network for Pedestrian Crossing Intention Prediction|Paper link|Code link
【ICCV2023】|EigenTrajectory: Low-Rank Descriptors for Multi-Modal Trajectory Forecasting|Paper link|Code link
【CVPR2022】|Trajectory prediction – Transformer|HiVT: Hierarchical Vector Transformer for Multi-Agent Motion Prediction|Paper link|Code link
【CVPR2022】|PointMotionNet: Point-Wise Motion Learning for Large-Scale LiDAR Point Clouds Sequences|Paper link
【CVPR2022】|Goal-driven Self-Attentive Recurrent Networks for Trajectory Prediction|Paper link|Code link
【CVPR2022】|Agent importance prediction for autonomous driving|Importance is in your attention: agent importance prediction for autonomous driving|Paper link
【CVPR2022】|End-to-End Trajectory Distribution Prediction Based on Occupancy Grid Maps|Paper link|Code link
【CVPR2022】|End-to-end detection and trajectory forecasting from point clouds|Forecasting from LiDAR via Future Object Detection|Paper link|Code link
【CVPR2022】|Trajectory prediction – pedestrians|Adaptive Trajectory Prediction via Transferable GNN|Paper link
【CVPR2022】|Trajectory prediction – context awareness|Raising context awareness in motion forecasting|Paper link|Code link
【CVPR2023】|IPCC-TP: Utilizing Incremental Pearson Correlation Coefficient for Joint Multi-Agent Trajectory Prediction|Paper link
【ECCV2022】|New baseline for 3D lane detection|PersFormer: a New Baseline for 3D Laneline Detection|Paper link|Code link
【ECCV2022】|Self-supervised line segmentation and description for LiDAR point clouds|SuperLine3D: Self-supervised Line Segmentation and Description for LiDAR Point Cloud|Paper link|Code link
【ECCV2022】|Lane Change Classification and Prediction with Action Recognition Networks|Paper link
【ECCV2022】|RCLane: Relay Chain Prediction for Lane Detection|Paper link
【CVPR2022】|Monocular 3D lane detection|ONCE-3DLanes: Building Monocular 3D Lane Detection|Paper link|Code link
【CVPR2022】|Towards Driving-Oriented Metric for Lane Detection Models|Paper link|Code link
【CVPR2022】|Rethinking Efficient Lane Detection via Curve Modeling|Paper link|Code link
【CVPR2022】|Top-view 3D lane detection with geometry structure priors|Reconstruct from Top View: A 3D Lane Detection Approach based on Geometry Structure Prior|Paper link
【CVPR2022】|Multi-dataset lane detection via domain adaptation|Multi-level Domain Adaptation for Lane Detection|Paper link
【CVPR2023】|Monocular lane detection in BEV|BEV-LaneDet: a Simple and Effective 3D Lane Detection Baseline|Paper link|Code link
【ECCV2022】|TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments|Paper link|Code link
【ECCV2022】|End-to-end autonomous driving – BEV architecture|ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning|Paper link|Code link
【ECCV2022】|End-to-end perception and prediction – unsupervised|Motion Inspired Unsupervised Perception and Prediction in Autonomous Driving|Paper link
【ICCV2023】|VAD: Vectorized Scene Representation for Efficient Autonomous Driving|Paper link|Code link
【CVPR2022】|On the Choice of Data for Efficient Training and Validation of End-to-End Driving Models|Paper link
【CVPR2022】|Learning from All Vehicles|Paper link|Code link
【CVPR2023】|Full-stack end-to-end autonomous driving|Planning-oriented Autonomous Driving|Paper link|Code link
【ECCV2022】|Towards Grand Unification of Object Tracking|Paper link|Code link
【ECCV2022】|MOTR: End-to-End Multiple-Object Tracking with TRansformer|Paper link|Code link
【ECCV2022】|AiATrack: Attention in Attention for Transformer Visual Tracking|Paper link|Code link
【ECCV2022】|PolarMOT: How Far Can Geometric Relations Take Us in 3D Multi-Object Tracking?|Paper link|Code link
【ECCV2022】|FEAR: Fast, Efficient, Accurate and Robust Visual Tracker|Paper link|Code link
【ECCV2022】|Single object tracking|Towards Sequence-Level Training for Visual Tracking|Paper link|Code link
【ICCV2023】|PVT++: A Simple End-to-End Latency-Aware Visual Tracking Framework|Paper link|Code link
【ICCV2023】|ReST: A Reconfigurable Spatial-Temporal Graph Model for Multi-Camera Multi-Object Tracking|Paper link|Code link
【ICCV2023】|3DMOTFormer: Graph Transformer for Online 3D Multi-Object Tracking|Paper link|Code link
【ICCV2023】|Single object tracking|MBPTrack: Improving 3D Point Cloud Tracking with Memory Networks and Box Priors|Paper link|Code link
【CVPR2022】|Multi-object tracking|MUTR3D: A Multi-camera Tracking Framework via 3D-to-2D Queries|Paper link
【CVPR2022】|Time3D: End-to-End Joint Monocular 3D Object Detection and Tracking for Autonomous Driving|Paper link
【CVPR2022】|Person re-identification|Cloning Outfits from Real-World Images to 3D Characters for Generalizable Person Re-Identification|Paper link|Code link
【CVPR2022】|Multi-object tracking|MeMOT: Multi-Object Tracking with Memory|Paper link
【CVPR2022】|Unified Transformer Tracker for Object Tracking|Paper link
【CVPR2022】|Single object tracking|Beyond 3D Siamese Tracking: A Motion-Centric Paradigm for 3D Single Object Tracking in Point Clouds|Paper link|Code link
【CVPR2022】|TripletTrack: 3D Object Tracking using Triplet Embeddings and LSTM|Paper link
【CVPR2023】|Language-referred multi-object tracking|Referring Multi-Object Tracking|Paper link|Code link
【CVPR2023】|Multi-modal tracking|Visual Prompt Multi-Modal Tracking|Paper link|Code link
【CVPR2023】|Long-term multi-object tracking|MotionTrack: Learning Robust Short-term and Long-term Motions for Multi-Object Tracking|Paper link
【ICCV2023】|Among Us: Adversarially Robust Collaborative Perception by Consensus|Paper link|Code link
【ICCV2023】|HM-ViT: Hetero-modal Vehicle-to-Vehicle Cooperative perception with vision transformer|Paper link
【ECCV2022】|Cooperative visual perception|Human-Vehicle Cooperative Visual Perception for Autonomous Driving Under Complex Traffic Environments|Paper link
【ECCV2022】|Blind spot detection|BlindSpotNet: Seeing Where We Cannot See|Paper link
【ECCV2022】|Gesture Recognition with Keypoint and Radar Stream Fusion for Automated Vehicles|Code link
【ECCV2022】|An Improved Lightweight Network Based on YOLOv5s for Object Detection in Autonomous Driving|Paper link
【ECCV2022】|Attention network|Learning 3D Semantics From Pose-Noisy 2D Images with Hierarchical Full Attention Network|Paper link|Code link