Video relevance prediction is one of the most important tasks for online streaming services. Given the relevance between videos and viewer feedback, the system can provide personalized recommendations that help users discover more content of interest. In most online services, the video relevance table is computed from users' implicit feedback, e.g., watch and search history. However, this kind of method performs poorly on the "cold-start" problem: when a new video is added to the library, the recommendation system must bootstrap its relevance score with very little known user behavior. One promising approach is to analyze the video content itself, i.e., to predict video relevance from video frames, audio, subtitles, and metadata. In this paper, we describe a challenge on Content-based Video Relevance Prediction (CBVRP) hosted by Hulu at the ACM Multimedia Conference 2018. Through this challenge, Hulu drives the study of an open problem: exploiting content characteristics extracted directly from the original video for relevance prediction. We provide massive video assets and ground-truth relevance derived from our real production system, to build a common platform for algorithm development and performance evaluation.
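To make the content-based approach concrete, the following is a minimal sketch of scoring a cold-start video against a library by comparing content feature vectors (e.g., extracted from frames or audio). The function names, the cosine-similarity scoring, and the toy feature vectors are illustrative assumptions for exposition, not the challenge's prescribed method or evaluation metric.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two content feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_by_content(new_video_feat, library_feats, top_k=3):
    """Rank library videos by content similarity to a new ('cold-start')
    video, for which no user-behavior signal exists yet."""
    scored = sorted(
        library_feats.items(),
        key=lambda kv: cosine_similarity(new_video_feat, kv[1]),
        reverse=True,
    )
    return [video_id for video_id, _ in scored[:top_k]]

# Toy example: 3-dimensional content features (hypothetical values).
library = {
    "show_a": [1.0, 0.0, 0.0],
    "show_b": [0.0, 1.0, 0.0],
    "show_c": [0.9, 0.1, 0.0],
}
print(rank_by_content([1.0, 0.0, 0.0], library))  # most similar first
```

In practice the feature vectors would come from learned frame- or audio-level representations rather than hand-set values, and the similarity function itself can be learned from the ground-truth relevance the challenge provides.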