The performance of machine learning (ML) models often deteriorates when the underlying data distribution changes over time, a phenomenon known as data distribution drift. When this happens, ML models need to be retrained and redeployed. In practice, ML Operations (MLOps) is often manual, i.e., humans trigger model retraining and redeployment. In this work, we present an automated MLOps pipeline that retrains neural network classifiers in response to significant data distribution changes. Our MLOps pipeline employs multi-criteria statistical techniques to detect distribution shifts and triggers model updates only when necessary, ensuring computational efficiency and economical use of resources. We demonstrate the effectiveness of our framework through experiments on several benchmark anomaly detection datasets, showing significant improvements in model accuracy and robustness compared to traditional retraining strategies. Our work provides a foundation for deploying more reliable and adaptive ML systems in dynamic real-world settings, where data distribution changes are common.
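To make the retraining trigger concrete, the sketch below illustrates one plausible multi-criteria drift check on a single feature, combining a two-sample Kolmogorov-Smirnov test with a Population Stability Index and firing only when both criteria agree. The thresholds, the per-feature scope, and the AND-combination rule are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np
from scipy.stats import ks_2samp


def psi(reference, current, bins=10):
    """Population Stability Index between two 1-D samples (hypothetical helper)."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the bin fractions to avoid division by zero and log(0).
    ref_frac = np.clip(ref_frac, 1e-6, None)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))


def drift_detected(reference, current, ks_alpha=0.05, psi_threshold=0.2):
    """Flag drift only when both statistical criteria agree, limiting spurious retraining."""
    ks_flag = ks_2samp(reference, current).pvalue < ks_alpha
    psi_flag = psi(reference, current) > psi_threshold
    return ks_flag and psi_flag


if __name__ == "__main__":
    # Synthetic example: a training-time feature vs. a shifted production batch.
    rng = np.random.default_rng(0)
    reference_batch = rng.normal(0.0, 1.0, size=5000)
    incoming_batch = rng.normal(0.8, 1.2, size=5000)
    if drift_detected(reference_batch, incoming_batch):
        print("Distribution shift detected: trigger model retraining and redeployment.")
    else:
        print("No significant shift: keep the currently deployed model.")
```

In this sketch, requiring both tests to flag a shift is one way to trade detection sensitivity against unnecessary retraining; other combination rules (e.g., majority voting over several criteria) are equally possible.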