LSTMs have a proven track record in analyzing sequential data. But what about unordered bags of instances, as found in the Multiple Instance Learning (MIL) setting? Although LSTMs are rarely used in this setting, we show that they excel here too. In addition, we show that LSTMs can indirectly capture instance-level information using only bag-level annotations, and can therefore be used to learn instance-level models in a weakly supervised manner. Our empirical evaluation on both simplified (MNIST) and realistic (Lookbook and Histopathology) datasets shows that LSTMs are competitive with, and sometimes surpass, state-of-the-art methods specifically designed for particular MIL problems. Moreover, their performance on instance-level prediction is close to that of fully supervised methods.
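To make the setup concrete, the following is a minimal sketch (not the paper's exact architecture; all names and dimensions are illustrative assumptions) of how an LSTM can produce a bag-level prediction from an unordered bag of instances: the instances are fed through the recurrence in some arbitrary order, and the final hidden state summarizes the bag.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SimpleLSTMCell:
    """A single LSTM cell with randomly initialized weights (illustrative only)."""
    def __init__(self, input_dim, hidden_dim, rng):
        self.hidden_dim = hidden_dim
        scale = 1.0 / np.sqrt(hidden_dim)
        # One weight matrix per gate: input, forget, output, candidate.
        self.W = rng.uniform(-scale, scale, (4, hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros((4, hidden_dim))

    def step(self, x, h, c):
        z = np.concatenate([x, h])
        i = sigmoid(self.W[0] @ z + self.b[0])   # input gate
        f = sigmoid(self.W[1] @ z + self.b[1])   # forget gate
        o = sigmoid(self.W[2] @ z + self.b[2])   # output gate
        g = np.tanh(self.W[3] @ z + self.b[3])   # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def bag_prediction(bag, cell, w_out):
    """Feed the instances of an (unordered) bag through the LSTM in an
    arbitrary order; the final hidden state serves as the bag summary."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    for x in bag:
        h, c = cell.step(x, h, c)
    return sigmoid(w_out @ h)  # bag-level probability

rng = np.random.default_rng(0)
cell = SimpleLSTMCell(input_dim=8, hidden_dim=16, rng=rng)
w_out = rng.normal(size=16)
bag = rng.normal(size=(5, 8))        # a bag of 5 instances, 8 features each
p = bag_prediction(bag, cell, w_out)
```

Training such a model only requires a label per bag; the claim of the paper is that, despite this weak supervision, the recurrence also picks up instance-level signal along the way.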