Large software systems tune hundreds of 'constants' to optimize their runtime performance. These values are commonly derived through intuition, lab tests, or A/B tests. A 'one-size-fits-all' approach is often sub-optimal, as the best value depends on runtime context. In this paper, we provide an experimental approach to replace constants with learned contextual functions for Skype, a widely used real-time communication (RTC) application. We present Resonance, a system based on contextual bandits (CB). We describe experiences from three real-world experiments: applying it to the audio, video, and transport components of Skype. We surface a unique and practical challenge of performing machine learning (ML) inference in large software systems written using encapsulation principles. Finally, we open-source FeatureBroker, a library to reduce the friction of adopting ML models in such development environments.