We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. We motivate our transductive loss by deriving a formal relation between the classification accuracy and mutual-information maximization. Furthermore, we propose a new alternating-direction solver, which substantially speeds up transductive inference over gradient-based optimization, while yielding competitive accuracy. We also provide a convergence analysis of our solver based on Zangwill's theory and bound-optimization arguments. TIM inference is modular: it can be used on top of any base-training feature extractor. Following standard transductive few-shot settings, our comprehensive experiments demonstrate that TIM outperforms state-of-the-art methods significantly across various datasets and networks, when used on top of a fixed feature extractor trained with simple cross-entropy on the base classes, without resorting to complex meta-learning schemes. It consistently brings between 2% and 5% improvement in accuracy over the best-performing method, not only on all the well-established few-shot benchmarks but also in more challenging scenarios, with random tasks, domain shift, and larger numbers of classes, as in the recently introduced META-DATASET. Our code is publicly available at https://github.com/mboudiaf/TIM. We also publicly release a standalone PyTorch implementation of META-DATASET, along with additional benchmarking results, at https://github.com/mboudiaf/pytorch-meta-dataset.
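To make the objective described above concrete, here is a minimal NumPy sketch of a loss combining supervised cross-entropy on the support set with mutual information over the query predictions, estimated as I(X; Y) = H(Y) − H(Y|X). The function names and the `lam` weighting are illustrative assumptions, not the paper's exact implementation or solver:

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a probability vector (or batch of vectors)."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def tim_like_loss(support_probs, support_labels, query_probs, lam=1.0):
    """Sketch of a TIM-style objective: supervised CE minus weighted mutual information.

    support_probs: (n_support, n_classes) predicted class probabilities on the support set
    support_labels: (n_support,) integer ground-truth labels
    query_probs: (n_query, n_classes) predicted class probabilities on the query set
    """
    # Supervised cross-entropy on the labeled support set.
    idx = np.arange(len(support_labels))
    ce = -np.mean(np.log(support_probs[idx, support_labels] + 1e-12))
    # Mutual information between query samples and predicted labels,
    # estimated from the soft predictions: I(X; Y) = H(Y) - H(Y|X).
    marginal = query_probs.mean(axis=0)          # empirical label marginal over queries
    h_marginal = entropy(marginal)               # H(Y): favors balanced label usage
    h_conditional = entropy(query_probs).mean()  # H(Y|X): favors confident predictions
    mutual_info = h_marginal - h_conditional
    # Minimize CE while maximizing the transductive mutual information.
    return ce - lam * mutual_info
```

Confident, class-balanced query predictions drive H(Y|X) down while keeping H(Y) high, so they lower this loss relative to uniform (uncertain) predictions at equal support-set cross-entropy.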