In recent years, an increasing amount of software with machine learning components has been deployed. This raises the question of quality assurance for such components: how can we validate whether machine-learned software fulfils its specified requirements? Current testing and verification approaches either focus on a single requirement (e.g., fairness) or specialise in a single type of machine learning model (e.g., neural networks). In this paper, we propose property-driven testing of machine learning models. Our approach, MLCheck, encompasses (1) a language for property specification and (2) a technique for systematic test case generation. The specification language is comparable to those of property-based testing tools. Test case generation employs advanced verification technology for a systematic, property-dependent construction of test suites, without additional user-supplied generator functions. We evaluate MLCheck using requirements and data sets from three different application areas (software discrimination, learning on knowledge graphs, and security). Our evaluation shows that, despite its generality, MLCheck can even outperform specialised testing approaches while having a comparable runtime.
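To make the notion of a property specification concrete, the following is a minimal Python sketch of one property the abstract alludes to: individual fairness (software discrimination). It is not MLCheck's actual specification API; the attribute index, the `fairness_property` and `check` helpers, and the scikit-learn-style `predict()` interface are all assumptions made for illustration. In MLCheck, the test suite would be constructed systematically from the property itself; here random inputs merely stand in.

```python
import numpy as np

PROTECTED = 3  # index of the protected attribute (e.g., gender) -- assumption

def fairness_property(model, x):
    """Property: flipping only the protected attribute must not change the prediction."""
    x_flipped = x.copy()
    x_flipped[PROTECTED] = 1 - x_flipped[PROTECTED]  # assumes a binary attribute
    return model.predict([x]) == model.predict([x_flipped])

def check(model, test_suite):
    """Return all counterexamples to the property found in a test suite."""
    return [x for x in test_suite if not fairness_property(model, x)]

if __name__ == "__main__":
    # Usage sketch with a trivial stand-in classifier; MLCheck would instead
    # generate the test suite in a property-dependent way.
    rng = np.random.default_rng(0)
    test_suite = list(rng.integers(0, 2, size=(100, 8)))

    class ConstantModel:
        def predict(self, X):
            return [0 for _ in X]

    print(check(ConstantModel(), test_suite))  # no counterexamples: [] expected
```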