Cross-modal systems trained on 2D visual inputs face a dimensional shift when processing 3D scenes. An in-scene camera can bridge the dimensionality gap, but its control module must be learned. We introduce a new method that improves multivariate mutual information estimates through regret minimisation with derivative-free optimisation. Our algorithm enables off-the-shelf cross-modal systems trained on 2D visual inputs to adapt online to object occlusions and to differentiate between features. Pairing expressive information measures with value-based optimisation allows the in-scene camera controller to learn directly from the noisy outputs of vision-language models. The resulting pipeline improves performance on cross-modal tasks over multi-object 3D scenes without resorting to pretraining or finetuning.
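To make the optimisation loop concrete, below is a minimal sketch, under stated assumptions, of the kind of derivative-free, regret-minimising procedure the abstract describes: a (1+1) evolution strategy that adjusts camera pose parameters to maximise a plug-in multivariate mutual information estimate over noisy observation streams. Everything here is illustrative rather than the paper's actual method: `estimate_mi`, `observe`, the two-dimensional pose parametrisation, and the synthetic noise model are all assumptions standing in for the unspecified estimator, vision-language-model outputs, and camera controller.

```python
# Minimal sketch: derivative-free regret minimisation over camera
# parameters to maximise a multivariate mutual information estimate.
# All names (estimate_mi, observe, the pose vector) are hypothetical
# stand-ins; the paper's estimator and control module are not shown here.
import numpy as np

rng = np.random.default_rng(0)

def estimate_mi(x, y, bins=8):
    """Plug-in MI estimate between two discretised observation streams."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over x
    py = pxy.sum(axis=0, keepdims=True)   # marginal over y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

def observe(pose):
    """Hypothetical stand-in for noisy vision-language-model outputs at a
    camera pose; correlation between streams peaks near pose = [1, -1]."""
    n = 256
    signal = rng.normal(size=n)
    noise_scale = 0.5 + np.sum((pose - np.array([1.0, -1.0])) ** 2)
    x = signal + rng.normal(scale=noise_scale, size=n)
    y = signal + rng.normal(scale=noise_scale, size=n)
    return x, y

# Derivative-free optimisation: a (1+1) evolution strategy. Progress is
# tracked against the best score seen so far; no gradients are required,
# which matters because the MI estimate is computed from noisy outputs.
pose = np.zeros(2)
best = estimate_mi(*observe(pose))
step = 0.5
for t in range(200):
    candidate = pose + rng.normal(scale=step, size=2)
    score = estimate_mi(*observe(candidate))
    if score > best:      # accept moves that improve the estimate
        pose, best = candidate, score
        step *= 1.1       # expand the search radius on success
    else:
        step *= 0.97      # contract it on failure

print(f"best pose {pose}, MI estimate {best:.3f}")
```

The (1+1) strategy is only one choice of derivative-free optimiser; any black-box method with a regret guarantee could fill the same role, since the loop only ever queries scalar scores of candidate poses.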