Automated prediction of valence, a key feature of a person's emotional state, from individuals' personal narratives may provide crucial information for mental healthcare (e.g. early diagnosis of mental disorders, monitoring of disease course, etc.). In the Interspeech 2018 ComParE Self-Assessed Affect challenge, the task of valence prediction was framed as a three-class classification problem using 8-second fragments from individuals' narratives. As such, the task did not allow for exploring contextual information of the narratives. In this work, we investigate the intrinsic information from multiple narratives recounted by the same individual in order to predict their current state of mind. Furthermore, with generalizability in mind, we decided to focus our experiments exclusively on textual information, as the public availability of audio narratives is limited compared to text. Our hypothesis is that context modeling might provide insights about emotion-triggering concepts (e.g. events, people, places) mentioned in the narratives that are linked to an individual's state of mind. We explore multiple machine learning techniques to model narratives. We find that the models are able to capture inter-individual differences, leading to more accurate predictions of an individual's emotional state than single narratives.