Despite the rise of deep learning in numerous areas of computer vision and image processing, iris recognition has not benefited considerably from these trends so far. Most existing research on deep iris recognition focuses on new models for generating discriminative and robust iris representations and relies on methodologies akin to traditional iris recognition pipelines. Hence, the proposed models do not approach iris recognition in an end-to-end manner, but rather use standard heuristic iris segmentation (and unwrapping) techniques to produce normalized inputs for the deep learning models. However, because deep learning is able to model very complex data distributions and nonlinear data changes, an obvious question arises: how important are traditional segmentation methods in a deep learning setting? To answer this question, we present in this paper an empirical analysis of the impact of iris segmentation on the performance of deep learning models, using a simple two-stage pipeline consisting of a segmentation and a recognition step. We not only evaluate how segmentation accuracy influences recognition performance, but also examine whether segmentation is needed at all. We use the CASIA Thousand and SBVPI datasets for the experiments and report several interesting findings.
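The two-stage pipeline above follows the traditional recipe: segment the iris, unwrap it into a normalized strip, and pass the strip to the recognition model. A minimal sketch of the unwrapping (Daugman-style rubber-sheet normalization) is shown below; the fixed pupil/iris circles here are hypothetical stand-ins for the output of a real segmentation step, and the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def rubber_sheet_unwrap(image, pupil_xy, pupil_r, iris_r, out_h=64, out_w=256):
    """Sample the annular iris region between the pupil and iris boundaries
    onto a fixed-size rectangular strip (rows = radius, columns = angle)."""
    cx, cy = pupil_xy
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(pupil_r, iris_r, out_h)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")  # (out_h, out_w) grids
    # Nearest-neighbor sampling; clip to stay inside the image bounds.
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]

# Toy grayscale image; circle parameters stand in for a segmenter's output.
img = np.arange(200 * 200, dtype=np.float32).reshape(200, 200)
strip = rubber_sheet_unwrap(img, pupil_xy=(100, 100), pupil_r=20, iris_r=80)
print(strip.shape)  # (64, 256)
```

In the end-to-end variant the paper contrasts this with, the strip (or the raw eye image itself) would simply be fed to the recognition network without any such geometric normalization.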