Conventional automated decision-support systems often prioritize predictive accuracy, overlooking the complexities of real-world settings where stakeholders' preferences may diverge or conflict. This can lead to outcomes that disadvantage vulnerable groups and erode trust in algorithmic processes. Participatory AI approaches aim to address these issues but remain largely context-specific, limiting their broader applicability and scalability. To address these gaps, we propose a participatory framework that reframes decision-making as a multi-stakeholder learning and optimization problem. Our modular, model-agnostic approach builds on the standard machine learning training pipeline to fine-tune user-provided prediction models and evaluate decision strategies, including compromise functions that mediate stakeholder trade-offs. A synthetic scoring mechanism aggregates user-defined preferences across multiple metrics, ranking strategies and selecting an optimal decision-maker to generate actionable recommendations that jointly optimize performance, fairness, and domain-specific goals. Empirical validation on two high-stakes case studies demonstrates the versatility of the framework and its promise as a more accountable, context-aware alternative to prediction-centric pipelines for socially impactful deployments.
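The synthetic scoring mechanism described above can be sketched as follows. This is a minimal, hypothetical illustration only: the metric names, weights, and candidate strategies are invented for exposition and are not taken from the paper. It shows the core idea of aggregating user-defined preferences across multiple metrics into a single score, ranking strategies, and selecting an optimal decision-maker.

```python
# Hypothetical sketch of the synthetic scoring step: user-defined preference
# weights aggregate per-strategy metric values into one score, candidate
# strategies are ranked, and the top-ranked one is selected. All metric
# names, weights, and strategies below are illustrative assumptions.

def synthetic_score(metrics, weights):
    """Weighted sum of metric values (all assumed normalized, higher is better)."""
    return sum(weights[m] * v for m, v in metrics.items())

def select_decision_maker(strategies, weights):
    """Rank candidate decision strategies by synthetic score; return the best and the full ranking."""
    ranked = sorted(strategies.items(),
                    key=lambda kv: synthetic_score(kv[1], weights),
                    reverse=True)
    return ranked[0][0], ranked

# Example: three candidate strategies evaluated on predictive accuracy,
# a fairness metric, and a domain-specific utility, with stakeholder-supplied weights.
weights = {"accuracy": 0.4, "fairness": 0.4, "utility": 0.2}
strategies = {
    "baseline":   {"accuracy": 0.91, "fairness": 0.62, "utility": 0.70},
    "compromise": {"accuracy": 0.87, "fairness": 0.85, "utility": 0.78},
    "fair_first": {"accuracy": 0.80, "fairness": 0.92, "utility": 0.74},
}
best, ranking = select_decision_maker(strategies, weights)
# With these weights, the compromise strategy ranks first.
```

A compromise function mediating stakeholder trade-offs would, in this sketch, correspond to a strategy like `compromise` whose metric profile balances competing objectives rather than maximizing any single one.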