Automated audio captioning models frequently produce overconfident predictions regardless of semantic accuracy, limiting their reliability in deployment. This deficiency stems from two factors: evaluation metrics based on n-gram overlap that fail to capture semantic correctness, and the absence of calibrated confidence estimation. We present a framework that addresses both limitations by integrating confidence prediction into audio captioning and redefining correctness through semantic similarity. Our approach augments a Whisper-based audio captioning model with a learned confidence prediction head that estimates uncertainty from decoder hidden states. We employ CLAP audio-text embeddings and sentence transformer similarities (FENSE) to define semantic correctness, enabling Expected Calibration Error (ECE) computation that reflects true caption quality rather than surface-level text overlap. Experiments on Clotho v2 demonstrate that confidence-guided beam search with semantic evaluation achieves dramatically improved calibration (CLAP-based ECE of 0.071) compared to greedy decoding baselines (ECE of 0.488), while simultaneously improving caption quality across standard metrics. Our results establish that semantic similarity provides a more meaningful foundation for confidence calibration in audio captioning than traditional n-gram metrics.
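To make the evaluation protocol concrete, below is a minimal sketch of how Expected Calibration Error can be computed when correctness is defined by semantic similarity instead of n-gram overlap. It is an illustration under stated assumptions, not the paper's exact implementation: the threshold value, array contents, and variable names (`clap_similarities`, `model_confidences`) are hypothetical; in practice the similarities would come from CLAP audio-text embeddings (or FENSE sentence-transformer scores) and the confidences from the learned prediction head.

```python
import numpy as np

def expected_calibration_error(confidences, correctness, n_bins=10):
    """Standard ECE over equal-width confidence bins.

    `correctness` is binary; here it is derived from semantic similarity
    rather than from n-gram overlap with reference captions.
    """
    confidences = np.asarray(confidences, dtype=float)
    correctness = np.asarray(correctness, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()   # mean predicted confidence in bin
        avg_acc = correctness[mask].mean()    # empirical semantic accuracy in bin
        ece += mask.mean() * abs(avg_conf - avg_acc)
    return ece

# Hypothetical usage: cosine similarities between CLAP audio embeddings and
# CLAP text embeddings of generated captions, and per-caption confidences
# from the prediction head. Values and threshold are illustrative only.
clap_similarities = np.array([0.62, 0.31, 0.48, 0.55])
model_confidences = np.array([0.70, 0.35, 0.50, 0.60])
threshold = 0.5  # assumed cutoff for "semantically correct", not from the paper
semantic_correct = (clap_similarities >= threshold).astype(float)
print(expected_calibration_error(model_confidences, semantic_correct))
```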