[CVPR 23] Data-driven Feature Tracking for Event Cameras
Because of their high temporal resolution, increased resilience to motion blur, and very sparse output, event cameras have been shown to be ideal for low-latency and low-bandwidth feature tracking, even in challenging scenarios. Existing feature tracking methods for event cameras are either handcrafted or derived from first principles but require extensive parameter tuning, are sensitive to noise, and do not generalize to different scenarios due to unmodeled effects. To tackle these deficiencies, we introduce the first data-driven feature tracker for event cameras, which leverages low-latency events to track features detected in a grayscale frame. We achieve robust performance via a novel frame attention module, which shares information across feature tracks. By directly transferring zero-shot from synthetic to real data, our data-driven tracker outperforms existing approaches in relative feature age by up to 120% while also achieving the lowest latency. This performance gap is further increased to 130% by adapting our tracker to real data with a novel self-supervision strategy.

Reference: Nico Messikommer*, Carter Fang*, Mathias Gehrig, Davide Scaramuzza, "Data-driven Feature Tracking for Event Cameras", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023, Vancouver, Canada.
PDF: https://rpg.ifi.uzh.ch/docs/CVPR23_Messikommer.pdf
Code: https://github.com/uzh-rpg/deep_ev_tracker
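The key architectural idea in the abstract is the frame attention module: each tracked feature carries its own state, and attention across all tracks in the same frame lets tracks share information before a per-track displacement is predicted. The snippet below is only a minimal sketch of that idea in PyTorch; the class name, dimensions, and layer choices are illustrative assumptions and do not reproduce the released implementation at https://github.com/uzh-rpg/deep_ev_tracker.

# Minimal, illustrative sketch (not the released implementation) of the
# frame-attention idea: every tracked feature keeps a state vector, and
# self-attention across all tracks in one frame lets them share context
# before a per-track 2D displacement is predicted.
import torch
import torch.nn as nn

class FrameAttentionSketch(nn.Module):
    def __init__(self, dim=128, num_heads=4):
        super().__init__()
        # Multi-head self-attention over the set of feature tracks.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Predict a 2D displacement for every track from its fused state.
        self.head = nn.Linear(dim, 2)

    def forward(self, track_states):
        # track_states: (batch, num_tracks, dim) per-feature states, e.g.
        # produced by a per-patch encoder over event representations.
        fused, _ = self.attn(track_states, track_states, track_states)
        fused = self.norm(track_states + fused)  # residual connection
        return self.head(fused)                  # (batch, num_tracks, 2)

# Example: 32 tracks with 128-dimensional states in one frame.
if __name__ == "__main__":
    states = torch.randn(1, 32, 128)
    print(FrameAttentionSketch()(states).shape)  # torch.Size([1, 32, 2])

Running the example prints one 2D displacement per track; in the paper, this sharing of information across feature tracks is what the frame attention module contributes to robust performance.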
[Science Robotics 23] Reaching the Limit in Autonomous Racing
[Nature 23] Champion-Level Drone Racing Using Deep Reinforcement Learning
MPC + Reinforcement Learning! [ICRA 24] Actor Critic Model Predictive Control
[CVPR 21] TimeLens: Event-based Video Frame Interpolation
Yunlong's Thesis Defense - Learning Robot Control From RL to Differential Simulation
[RSS 23] HDVIO: Improving Localization and Disturbance Estimation with Hybrid Dynamics
Autonomous Vision-based Drones with Human-Level Performance - Davide Scaramuzza
[Science Robotics 21] Learning High-Speed Flight in the Wild
[ECCV 24] Reinforcement Learning + Visual Odometry! Reinforcement Learning Meets Visual Odometry
Contrastive Learning + Vision-based Racing! [ICRA 24] Contrastive Learning in Vision-based Agile Flight
Who says running VSLAM requires extreme caution? Our VIO is highly robust, ready to use with confidence, and delivers high-rate 200 Hz localization output
[RSS 21] NeuroBEM: A Hybrid Aerodynamic Quadrotor Model Based on Neural Networks
[CoRL 24, Oral] Learning Quadruped Locomotion Using Differentiable Simulation
[RA-L 21 Best Paper] Autonomous Quadrotor Flight Despite Rotor Failure Using Onboard Vision Sensors
[TRO 21] Model Predictive Contouring Control for Time-Optimal Quadrotor Flight
[RSS Best Demo Paper] Demonstrating Agile Flight from Pixels without State Estimation