Discrete decision tasks in machine learning exhibit a fundamental misalignment between training and inference: models are optimized with continuous-valued outputs but evaluated on discrete predictions. The misalignment stems from the discontinuity of discretization operations, which prevents decision behavior from being incorporated directly into gradient-based optimization. To address this, we propose a theoretically grounded framework, the Binarization-Aware Adjuster (BAA), which embeds binarization characteristics into continuous optimization. BAA is built on the Distance Weight Function (DWF), which modulates each prediction's loss contribution according to its correctness and its proximity to the decision threshold, aligning optimization emphasis with decision-critical regions while remaining compatible with standard learning pipelines. We apply BAA to edge detection (ED), a representative binary decision problem. Experiments with representative models and datasets show that incorporating BAA into optimization yields consistent performance improvements. Overall, this work establishes a principled approach to aligning continuous optimization with discrete decision behavior and demonstrates its effectiveness in a concrete application setting.
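The abstract describes the DWF only at a high level, so the following is a minimal illustrative sketch, not the paper's actual formulation: a hypothetical weight function (`distance_weight`, with made-up parameters `alpha` and `beta`) that grows as a prediction approaches the decision threshold and when its binarized value disagrees with the label, used to modulate a standard binary cross-entropy loss.

```python
import numpy as np

def distance_weight(p, y, t=0.5, alpha=2.0, beta=1.0):
    # Hypothetical DWF sketch (not the paper's definition):
    # weight grows as the prediction p nears the threshold t,
    # and is further scaled up when the binarized prediction
    # disagrees with the ground-truth label y.
    d = np.abs(p - t)                              # distance to decision threshold
    correct = ((p >= t).astype(float) == y)        # binarized correctness
    proximity = np.exp(-alpha * d)                 # emphasize threshold-adjacent samples
    penalty = np.where(correct, 1.0, 1.0 + beta)   # upweight incorrect decisions
    return proximity * penalty

def baa_bce_loss(p, y, eps=1e-7):
    # Binary cross-entropy modulated by the distance weights,
    # illustrating how such a weighting plugs into a standard loss.
    p = np.clip(p, eps, 1.0 - eps)
    w = distance_weight(p, y)
    bce = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    return float(np.mean(w * bce))
```

Under this toy weighting, a prediction of 0.51 on a positive sample receives a larger weight than a confident 0.9, so optimization pressure concentrates near the decision boundary while the loss remains differentiable end to end.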