Object tracking refers to the following task: given the initial state of a target (e.g., its position and size) in the first frame of a video, automatically estimate the state of the target object in subsequent frames. It divides into single-object tracking and multi-object tracking. The human eye can follow a particular target over time with relative ease, but for a machine the task is far from simple, especially when the target deforms drastically, is occluded by other objects, or is surrounded by visually similar distractors. Over the past few decades, research on object tracking has advanced considerably, and since the introduction of various machine learning algorithms the field has flourished. Since 2013, deep learning methods have emerged in object tracking and have gradually surpassed traditional methods in performance, achieving major breakthroughs.
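
In practice, single-object tracking reduces to a simple contract: initialize a tracker with the target's first-frame bounding box, then request an updated box for every subsequent frame. Below is a minimal sketch of that loop using OpenCV's built-in KCF tracker (a correlation-filter method); the video path and the initial box are placeholder assumptions, and depending on the installed opencv-contrib-python build the factory may be cv2.TrackerKCF_create or cv2.legacy.TrackerKCF_create.

```python
# Minimal single-object tracking loop: initialize on frame 1, update per frame.
# Assumes opencv-contrib-python; "video.mp4" and the initial box are placeholders.
import cv2

cap = cv2.VideoCapture("video.mp4")
ok, frame = cap.read()
assert ok, "could not read the first frame"

init_box = (150, 80, 64, 48)          # placeholder (x, y, w, h) of the target in frame 1
tracker = cv2.TrackerKCF_create()     # cv2.legacy.TrackerKCF_create() on some 4.x builds
tracker.init(frame, init_box)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)  # estimated (x, y, w, h) in this frame
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:            # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```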

Knowledge Compilation

Object Tracking (Visual Tracking): A Zhuanzhi Compilation

Getting Started

  1. Moving Object Tracking series (parts 1–17)

  2. Object Tracking Study Notes (parts 2–4)

  3. Deep Learning Methods for Object Tracking

  4. Research on Deep-Learning-Based Multi-Object Tracking Algorithms

  5. From Traditional Methods to Deep Learning: An Overview of the Development of Object Tracking Methods

  6. Visual Tracking Algorithm Introduction

  7. Online Object Tracking: A Benchmark (paper notes and translation): [http://blog.csdn.net/shanglianlm/article/details/47376323], [http://blog.csdn.net/roamer_nuptgczx/article/details/51379191]

  8. What are the classic object tracking algorithms in computer vision?

Advanced Papers

NIPS2013

  • DLT: Naiyan Wang and Dit-Yan Yeung. "Learning A Deep Compact Image Representation for Visual Tracking." NIPS (2013).

CVPR2014

ECCV2014

BMVC2014

ICML2015

CVPR2015

ICCV2015

NIPS2016

  • Learnet: Luca Bertinetto, João F. Henriques, Jack Valmadre, Philip H. S. Torr, Andrea Vedaldi. "Learning feed-forward one-shot learners." NIPS (2016).

CVPR2016

ECCV2016

CVPR2017

ICCV2017

PAMI & IJCV & TIP

ArXiv

Benchmark

Surveys

  1. Visual Tracking: An Experimental Survey. PAMI 2014.
    - [http://ieeexplore.ieee.org/document/6671560/], [https://dl.acm.org/citation.cfm?id=2693387]
    - Code: [http://alov300pp.joomlafree.it/trackers-resource.html]

  2. Online Object Tracking: A Benchmark. Wu Y., Lim J., Yang M.-H. CVPR 2013.
    - Website and code: [http://cvlab.hanyang.ac.kr/tracker_benchmark/benchmark_v10.html]

  3. A survey of datasets for visual tracking
    - [https://link.springer.com/article/10.1007/s00138-015-0713-y]

  4. Siamese Learning Visual Tracking: A Survey

  5. A survey on multiple object tracking algorithm

Tutorial

  1. Object Tracking
  2. Stanford CS231b, Lecture 5: Visual Tracking, by Alexandre Alahi (Stanford Vision Lab)

Code

  1. Hierarchical Convolutional Features for Visual Tracking
  2. Robust Visual Tracking via Convolutional Networks
  3. Learning Multi-Domain Convolutional Neural Networks for Visual Tracking
  4. Understanding and Diagnosing Visual Tracking Systems
  5. Visual Tracking with Fully Convolutional Networks
  6. Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks
  7. Learning to Track at 100 FPS with Deep Regression Networks
  8. Fully-Convolutional Siamese Networks for Object Tracking
  9. Spatially Supervised Recurrent Convolutional Neural Networks for Visual Object Tracking
  10. Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network
  11. ECO: Efficient Convolution Operators for Tracking
  12. End-to-end representation learning for Correlation Filter based tracking
  13. Context-Aware Correlation Filter Tracking
  14. CREST: Convolutional Residual Learning for Visual Tracking
  15. Benchmark results and a paper round-up compiled by Wang Qiang, a PhD student in Hu Weiming's group at the Institute of Automation, Chinese Academy of Sciences (much of this compilation draws on his work; thanks again)
  16. Benchmark Results of Correlation Filters. Correlation filters have been applied very widely in tracking in recent years with impressive results; this repository summarizes the relevant papers of the past few years. Most of them already appear under Advanced Papers above, but this GitHub link categorizes the correlation-filter (CF) variants very thoroughly and is worth bookmarking. A minimal sketch of the core CF idea follows this list.
    - [https://github.com/HakaseH/CF_benchmark_results]
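
To make the correlation-filter idea concrete, here is a minimal MOSSE-style sketch in NumPy. It learns a filter in the Fourier domain from a single training patch and relocalizes the target as the peak of the correlation response. This is a toy illustration under stated assumptions (one grayscale patch, a Gaussian target response, an illustrative regularizer lambda), not a reproduction of any tracker listed above; real CF trackers add cosine windowing, online updates, and richer feature channels.

```python
# Minimal MOSSE-style correlation filter sketch (single training patch).
# Solves H* = (G . conj(F)) / (F . conj(F) + lambda) in the Fourier domain.
import numpy as np

def gaussian_peak(h, w, sigma=2.0):
    """Desired response: a Gaussian centered on the target."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma**2))

def train_filter(patch, lam=1e-2):
    """Learn the filter that maps the patch to the Gaussian peak."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(gaussian_peak(*patch.shape))
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H_conj, patch):
    """Correlate a new patch with the filter; the response peak is the target."""
    response = np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return dy, dx, response

# Toy usage: the "target" is a bright blob; shift it circularly and re-localize.
rng = np.random.default_rng(0)
patch = rng.normal(0, 0.1, (64, 64))
patch[30:34, 30:34] += 1.0                 # target near (32, 32)
H_conj = train_filter(patch)
shifted = np.roll(np.roll(patch, 5, axis=0), -3, axis=1)
dy, dx, _ = respond(H_conj, shifted)
print("peak at:", dy, dx)                  # expect roughly (37, 29)
```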

Domain Experts

  1. Ming-Hsuan Yang [http://faculty.ucmerced.edu/mhyang/]
  • Ming-Hsuan Yang is arguably the leading figure in visual tracking; most of the researchers listed below have collaborated with him, and his citation count is well over ten thousand.
  • Representative works:
    - Robust Visual Tracking via Consistent Low-Rank Sparse Learning
    - FCT, IJCV 2014: Fast Compressive Tracking
    - RST, PAMI 2014: Robust Superpixel Tracking; SPT, ICCV 2011: Superpixel Tracking
    - SVD, TIP 2014: Learning Structured Visual Dictionary for Object Tracking
    - ECCV 2014: Spatiotemporal Background Subtraction Using Minimum Spanning Tree and Optical Flow
    - PAMI 2011: Robust Object Tracking with Online Multiple Instance Learning
    - MIL, CVPR 2009: Visual Tracking with Online Multiple Instance Learning
    - IJCV 2008: Incremental Learning for Robust Visual Tracking
  2. Haibin Ling
  3. Huchuan Lu
  4. Hongdong Li
  5. Lei Zhang
  6. Xiaogang Wang
  7. Matej Kristan
  8. João F. Henriques
  9. Martin Danelljan
  10. Kaihua Zhang
  11. Hamed Kiani
  12. Luca Bertinetto
  13. Tianzhu Zhang

Datasets

  1. OTB (Object Tracking Benchmark)
  2. VOT (Visual Object Tracking challenge)

This is a preliminary version and our knowledge is limited; if you find errors or omissions, suggestions and additions are welcome, and the list will be kept up to date. This article is original content from the Zhuanzhi content team and may not be reproduced without permission; for reprint requests, email fangquanyi@gmail.com or contact the Zhuanzhi assistant on WeChat (Rancho_Fang).

VIP Content

Title

Vision Meets Drones: Past, Present and Future

Keywords

Drones (UAVs), computer vision, aerial photography, deep learning, object detection and tracking

Abstract

Drones (general unmanned aerial vehicles) equipped with cameras have been widely deployed in agriculture, aerial photography, rapid delivery, and surveillance. As a result, the demand for automatic understanding of visual data collected by drones keeps growing, bringing computer vision and drones ever closer together. To promote and track the development of object detection and tracking algorithms, we organized two challenge workshops in conjunction with the European Conference on Computer Vision 2018 (ECCV 2018) and the IEEE International Conference on Computer Vision 2019 (ICCV 2019), attracting more than 100 teams worldwide. We provide a large-scale drone-captured dataset, VisDrone, which includes four tracks: (1) image object detection, (2) video object detection, (3) single-object tracking, and (4) multi-object tracking. This paper first reviews object detection and tracking datasets and benchmarks, and discusses the challenges of collecting large-scale drone-based detection and tracking datasets with fully manual annotation. We then describe our VisDrone dataset, captured over various urban and suburban areas of 14 different cities across China, from north to south. VisDrone is the largest such dataset published to date and enables extensive evaluation and investigation of visual analysis algorithms on drone platforms. The paper provides a detailed analysis of the current state of large-scale object detection and tracking on drones, summarizes the open challenges, and proposes future directions and improvements. We expect the benchmark to greatly boost research and development in video analysis on drone platforms.

Authors

Pengfei Zhu∗, Longyin Wen∗, Dawei Du∗, Xiao Bian, Qinghua Hu, Haibin Ling

Latest Papers

The study of mouse social behaviours has been increasingly undertaken in neuroscience research. However, automated quantification of mouse behaviours from videos of interacting mice is still a challenging problem, where object tracking plays a key role in locating mice in their living spaces. Artificial markers are often applied for tracking multiple mice, but they are intrusive and consequently interfere with the movements of mice in a dynamic environment. In this paper, we propose a novel method to continuously track several mice and individual body parts without requiring any specific tagging. First, we propose an efficient and robust deep-learning-based mouse part detection scheme to generate part candidates. Subsequently, we propose a novel Bayesian Integer Linear Programming model that jointly assigns the part candidates to individual targets under the necessary geometric constraints whilst establishing pairwise associations between the detected parts. Since no publicly available dataset in the research community provides a quantitative test-bed for part detection and tracking of multiple mice, we introduce a new, challenging Multi-Mice PartsTrack dataset composed of complex behaviours and actions. Finally, we evaluate our approach against several baselines on the new dataset, where the results show that our method outperforms the other state-of-the-art approaches in terms of accuracy.
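
The part-to-identity assignment at the heart of such a pipeline can be illustrated with a much simpler stand-in: treat it as bipartite matching between detected part candidates and predicted track positions, solved with the Hungarian algorithm. The sketch below uses scipy.optimize.linear_sum_assignment; it is a deliberately simplified substitute for the paper's Bayesian Integer Linear Programming model (no geometric constraints or pairwise terms), and all coordinates are made up.

```python
# Toy part-to-identity assignment: a simplified stand-in for the paper's
# Bayesian ILP. Cost = distance from each detected part candidate to each
# track's predicted position; the Hungarian algorithm picks the matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Predicted snout positions for 3 tracked mice (made-up coordinates).
predicted = np.array([[10.0, 12.0], [40.0, 35.0], [70.0, 20.0]])

# Detected snout candidates in the current frame (one per mouse here).
candidates = np.array([[41.0, 33.0], [9.0, 13.0], [68.0, 22.0]])

# Pairwise Euclidean distances: rows = tracks, cols = candidates.
cost = np.linalg.norm(predicted[:, None, :] - candidates[None, :, :], axis=2)

rows, cols = linear_sum_assignment(cost)
for track, cand in zip(rows, cols):
    print(f"mouse {track} <- candidate {cand} (cost {cost[track, cand]:.2f})")
# Expected: mouse 0 <- candidate 1, mouse 1 <- candidate 0, mouse 2 <- candidate 2
```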
