Deep learning has achieved great success in video action recognition, but collecting and annotating training data remains laborious in two respects: (1) a large amount of annotated data is required, and (2) temporally annotating the location of each action is time-consuming. Existing lines of work, such as few-shot learning and untrimmed video recognition, address one of these issues, but very few handle both simultaneously. In this paper, we target a new problem, Annotation-Efficient Video Recognition, which reduces the annotation requirements for both the number of samples and the action locations. The problem is challenging for two reasons: (1) untrimmed videos provide only weak supervision, and (2) video segments not relevant to the current actions of interest (background, BG) may contain actions of interest (foreground, FG) from novel classes, a widespread phenomenon that has rarely been studied in few-shot untrimmed video recognition. To address these challenges, we analyze the properties of BG, categorize it into informative BG (IBG) and non-informative BG (NBG), and propose (1) an open-set detection based method to find NBG and FG, (2) a contrastive learning method to learn IBG and distinguish NBG in a self-supervised way, and (3) a self-weighting mechanism to better distinguish IBG from FG. Extensive experiments on ActivityNet v1.2 and ActivityNet v1.3 verify the rationale and effectiveness of the proposed methods.
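For intuition on component (2), below is a minimal sketch of an InfoNCE-style contrastive objective that pulls an anchor segment toward same-class (e.g., IBG) segments and pushes it away from NBG segments. The paper's exact loss, feature extractor, and pairing strategy are not given in the abstract, so this is only an assumed illustration: the function name, the feature tensors, and the temperature value are all hypothetical, not the authors' implementation.

```python
# Hedged sketch of an InfoNCE-style contrastive loss over precomputed
# video-segment features. All names here are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positives, negatives, temperature=0.1):
    """One InfoNCE term per positive, averaged: the anchor is pulled
    toward each positive and pushed away from all negatives."""
    anchor = F.normalize(anchor, dim=-1)        # (D,)
    positives = F.normalize(positives, dim=-1)  # (P, D)
    negatives = F.normalize(negatives, dim=-1)  # (N, D)
    pos_sim = positives @ anchor / temperature  # (P,) cosine similarities
    neg_sim = negatives @ anchor / temperature  # (N,)
    # Each row: [one positive similarity | all negative similarities].
    logits = torch.cat(
        [pos_sim.unsqueeze(1),
         neg_sim.unsqueeze(0).expand(len(pos_sim), -1)], dim=1)
    targets = torch.zeros(len(pos_sim), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, targets)

# Toy usage: IBG segments from the same video as positives, NBG as negatives.
anchor = torch.randn(128)
ibg = torch.randn(4, 128)   # informative background segments (positives)
nbg = torch.randn(16, 128)  # non-informative background segments (negatives)
print(contrastive_loss(anchor, ibg, nbg).item())
```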