Identifying a short segment in a long video that semantically matches a text query is a challenging task that has important potential applications in language-based video search, browsing, and navigation. Typical retrieval systems respond to a query with either a whole video or a pre-defined video segment, but it is challenging to localize undefined segments in untrimmed and unsegmented videos, where exhaustively searching over all possible segments is intractable. The outstanding challenge is that the representation of a video must account for different levels of granularity in the temporal domain. To tackle this problem, we propose the HierArchical Multi-Modal EncodeR (HAMMER), which encodes a video at both the coarse-grained clip level and the fine-grained frame level to extract information at different scales for multiple subtasks, namely video retrieval, segment temporal localization, and masked language modeling. We conduct extensive experiments to evaluate our model on moment localization in video corpus using the ActivityNet Captions and TVR datasets. Our approach outperforms previous methods as well as strong baselines, establishing a new state of the art for this task.
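To make the two-level encoding idea concrete, the sketch below illustrates one possible way to wire a hierarchical video-text encoder with the three subtask heads mentioned above. This is not the authors' released implementation; all module names, dimensions, pooling choices, and head designs are assumptions made purely for illustration, using standard PyTorch components.

```python
# Illustrative sketch (not the official HAMMER code) of a hierarchical
# multi-modal encoder: frames are contextualized within clips (fine-grained),
# clip summaries are contextualized across the video (coarse-grained), and
# three heads cover video retrieval, temporal localization, and MLM.
import torch
import torch.nn as nn


class HierarchicalVideoTextEncoder(nn.Module):
    def __init__(self, frame_dim=512, vocab_size=30522, d_model=256, n_heads=4):
        super().__init__()
        self.frame_proj = nn.Linear(frame_dim, d_model)
        # Fine-grained encoder: operates on frames inside each clip.
        self.frame_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        # Coarse-grained encoder: operates on clip-level summaries across the video.
        self.clip_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        # Heads for the three subtasks named in the abstract (designs assumed).
        self.retrieval_head = nn.Linear(d_model, d_model)   # video retrieval similarity
        self.localization_head = nn.Linear(d_model, 2)      # per-frame start/end logits
        self.mlm_head = nn.Linear(d_model, vocab_size)      # masked language modeling

    def forward(self, frames, text_ids):
        # frames: (batch, n_clips, n_frames_per_clip, frame_dim); text_ids: (batch, n_tokens)
        b, n_clips, n_frames, _ = frames.shape
        x = self.frame_proj(frames).reshape(b * n_clips, n_frames, -1)
        frame_feats = self.frame_encoder(x)                          # fine-grained features
        clip_tokens = frame_feats.mean(dim=1).reshape(b, n_clips, -1)
        clip_feats = self.clip_encoder(clip_tokens)                  # coarse-grained features
        text_feats = self.text_encoder(self.text_embed(text_ids))

        video_emb = clip_feats.mean(dim=1)                           # whole-video embedding
        query_emb = text_feats.mean(dim=1)                           # query embedding
        retrieval_score = (self.retrieval_head(video_emb) * query_emb).sum(-1)
        start_end_logits = self.localization_head(
            frame_feats.reshape(b, n_clips * n_frames, -1))          # temporal localization
        mlm_logits = self.mlm_head(text_feats)                       # masked-token prediction
        return retrieval_score, start_end_logits, mlm_logits


# Example usage: one video with 4 clips of 8 frames each, and a 12-token query.
model = HierarchicalVideoTextEncoder()
frames = torch.randn(1, 4, 8, 512)
text = torch.randint(0, 30522, (1, 12))
score, spans, mlm = model(frames, text)
```

The key design point the sketch tries to capture is that the same video yields both frame-level features (used for fine-grained segment localization) and clip-level features (used for coarse-grained retrieval), so a single forward pass serves all three subtasks.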