Surgical data science is a new research field that aims to observe all aspects of the patient treatment process in order to provide the right assistance at the right time. Due to the breakthrough successes of deep learning-based solutions for automatic image annotation, the availability of reference annotations for algorithm training is becoming a major bottleneck in the field. The purpose of this paper was to investigate the concept of self-supervised learning to address this issue. Our approach is guided by the hypothesis that unlabeled video data can be used to learn a representation of the target domain that boosts the performance of state-of-the-art machine learning algorithms when used for pre-training. The core of the method is an auxiliary task, based on raw endoscopic video data of the target domain, that is used to initialize the convolutional neural network (CNN) for the target task. In this paper, we propose the re-colorization of medical images with a generative adversarial network (GAN)-based architecture as the auxiliary task. A variant of the method involves a second pre-training step based on labeled data for the target task from a related domain. We validate both variants using medical instrument segmentation as the target task. The proposed approach can be used to radically reduce the manual annotation effort involved in training CNNs. Compared to the baseline approach of generating annotated data from scratch, our method reduces the number of labeled images required by up to 75% in our exploratory experiments without sacrificing performance. Our method also outperforms alternative methods for CNN pre-training, such as pre-training on publicly available non-medical or medical data using the target task (in this instance: segmentation). As it makes efficient use of available public and non-public, labeled and unlabeled data, the approach has the potential to become a valuable tool for CNN (pre-)training.
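To make the pipeline concrete, below is a minimal PyTorch sketch of the auxiliary re-colorization task: a generator learns to restore the colors of artificially grayscaled endoscopic frames, a discriminator judges whether the result looks like a real frame, and the generator's encoder is afterwards reused to initialize the segmentation CNN. The architecture, losses, and hyperparameters here are illustrative assumptions for exposition, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Convolutional encoder shared between the auxiliary and target tasks."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)


class Generator(nn.Module):
    """Re-colorizes a grayscale endoscopic frame (the auxiliary task)."""

    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, gray):
        return self.decoder(self.encoder(gray))


class Discriminator(nn.Module):
    """Scores whether a colorization looks like a real endoscopic frame."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)


def pretrain_step(gen, disc, opt_g, opt_d, rgb_batch):
    """One adversarial re-colorization step on a batch of unlabeled frames.

    rgb_batch is assumed to be scaled to [-1, 1] to match the Tanh output.
    """
    bce = nn.BCEWithLogitsLoss()
    real = torch.ones(rgb_batch.size(0), 1, device=rgb_batch.device)
    fake_lbl = torch.zeros_like(real)

    gray = rgb_batch.mean(dim=1, keepdim=True)  # crude grayscale proxy
    fake = gen(gray)

    # Discriminator: distinguish real frames from re-colorized ones.
    opt_d.zero_grad()
    d_loss = bce(disc(rgb_batch), real) + bce(disc(fake.detach()), fake_lbl)
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator while staying close to the true colors
    # (the L1 weight of 100 is a typical pix2pix-style choice, not the paper's).
    opt_g.zero_grad()
    g_loss = bce(disc(fake), real) + 100 * nn.functional.l1_loss(fake, rgb_batch)
    g_loss.backward()
    opt_g.step()


# Usage sketch: pre-train on unlabeled video frames, then transfer the encoder.
enc = Encoder()
gen, disc = Generator(enc), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999))
rgb_batch = torch.rand(4, 3, 64, 64) * 2 - 1  # stand-in for real frames
pretrain_step(gen, disc, opt_g, opt_d, rgb_batch)

seg_encoder = Encoder()  # encoder of the target segmentation CNN
seg_encoder.load_state_dict(gen.encoder.state_dict())  # the proposed warm start
```

In the second variant of the method, an intermediate supervised pre-training step on labeled segmentation data from a related domain would be inserted between the warm start and the final fine-tuning on the target-domain data.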

