Automatic prediction of emotion promises to revolutionise human-computer interaction. Recent trends involve fusing multiple modalities (audio, visual, and physiological) to classify emotional state. However, practical considerations 'in the wild' limit collection of physiological data to commoditised heartbeat sensors. Furthermore, real-world applications often require a measure of uncertainty over model outputs. We present an end-to-end deep learning model for classifying emotional valence from unimodal heartbeat data. We further propose a Bayesian framework for modelling uncertainty over valence predictions, and describe a procedure for tuning its output according to varying demands on confidence. We benchmarked our framework against two established datasets in the field, achieving a peak classification accuracy of 90%. These results lay the foundation for applications of affective computing in real-world domains such as healthcare, where a high premium is placed on non-invasive data collection and predictive certainty.
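The abstract does not specify how the Bayesian framework is realised, but one common approach to "tuning output according to varying demands on confidence" is to draw repeated stochastic forward passes (e.g. Monte Carlo dropout), compute the predictive entropy of the averaged class probabilities, and abstain when it exceeds an application-chosen threshold. The sketch below is a minimal illustration under that assumption; `noisy_logits`, the class count, and all parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # numerically stable softmax over a logit vector
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_predict(stochastic_logits_fn, x, T=50):
    """Average T stochastic forward passes (MC-dropout style) and
    return mean class probabilities plus predictive entropy."""
    probs = np.stack([softmax(stochastic_logits_fn(x)) for _ in range(T)])
    mean = probs.mean(axis=0)
    entropy = -np.sum(mean * np.log(mean + 1e-12))
    return mean, entropy

def classify_with_abstention(mean_probs, entropy, max_entropy):
    """Return the predicted valence class, or None (abstain) when
    uncertainty exceeds the application's confidence demand."""
    if entropy > max_entropy:
        return None
    return int(np.argmax(mean_probs))

# Hypothetical stochastic "model": clean logits perturbed by
# dropout-like noise on each forward pass.
def noisy_logits(x):
    return x + rng.normal(scale=0.5, size=x.shape)

mean, ent = mc_predict(noisy_logits, np.array([2.0, -1.0]))
decision = classify_with_abstention(mean, ent, max_entropy=0.5)
```

Lowering `max_entropy` trades coverage for confidence: a healthcare deployment might demand a strict threshold and route abstentions to a human, while a low-stakes application could accept every prediction.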