Log-based anomaly detection (LAD) is critical for ensuring the reliability of large-scale distributed systems. However, most existing LAD approaches assume centralized training, which is often impractical due to privacy constraints and the decentralized nature of system logs. While federated learning (FL) offers a promising alternative, there is a lack of dedicated testbeds tailored to the needs of LAD in federated settings. To address this, we present FedLAD, a unified platform for training and evaluating LAD models under FL constraints. FedLAD supports plug-and-play integration of diverse LAD models, benchmark datasets, and aggregation strategies, while offering runtime support for validation logging (self-monitoring), parameter tuning (self-configuration), and adaptive strategy control (self-adaptation). By enabling reproducible and scalable experimentation, FedLAD bridges the gap between FL frameworks and LAD requirements, providing a solid foundation for future research. Project code is publicly available at: https://github.com/AA-cityu/FedLAD.
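The pluggable aggregation strategies mentioned above can be illustrated with a minimal sketch of FedAvg-style weighted parameter averaging. This is a generic illustration of federated aggregation, not FedLAD's actual API; the function name and data layout are assumptions.

```python
from typing import Dict, List

def fedavg(client_weights: List[Dict[str, List[float]]],
           client_sizes: List[int]) -> Dict[str, List[float]]:
    """Aggregate client model parameters by their dataset-size-weighted mean.

    client_weights: one parameter dict per client (name -> flat weight list).
    client_sizes: number of local training samples per client.
    (Illustrative sketch only; not FedLAD's real interface.)
    """
    total = sum(client_sizes)
    aggregated: Dict[str, List[float]] = {}
    for name in client_weights[0]:
        aggregated[name] = [
            sum(w[name][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0][name]))
        ]
    return aggregated

# Two clients with equal data sizes: the result is the plain mean.
print(fedavg([{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}], [1, 1]))
```

Swapping in a different strategy (e.g. a median-based robust aggregator) would only require replacing this function, which is the kind of plug-and-play substitution the platform is described as supporting.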