Human language processing relies on the brain's capacity for predictive inference. We present a machine learning framework for decoding neural (EEG) responses to dynamic visual language stimuli in Deaf signers. Using coherence between neural signals and optical flow-derived motion features, we construct spatiotemporal representations of predictive neural dynamics. Through entropy-based feature selection, we identify frequency-specific neural signatures that differentiate interpretable linguistic input from linguistically disrupted (time-reversed) stimuli. Our results reveal distributed left-hemispheric and frontal low-frequency coherence as key features in language comprehension, with experience-dependent neural signatures correlating with age. This work demonstrates a novel multimodal approach for probing experience-driven generative models of perception in the brain.
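The core measurement described above (coherence between EEG channels and an optical-flow-derived motion signal, contrasted between intact and time-reversed stimuli) can be illustrated with a minimal sketch. This is not the authors' pipeline: all signals here are synthetic, the sampling rate, band limits, and coupling strengths are assumed for illustration, and the entropy score is a generic Shannon entropy over a coherence histogram standing in for the paper's entropy-based feature selection.

```python
# Illustrative sketch (assumed parameters, synthetic data): magnitude-squared
# coherence between an EEG channel and an optical-flow motion time series,
# plus a Shannon-entropy score over coherence values.
import numpy as np
from scipy.signal import coherence


def band_coherence(eeg, motion, fs, band=(1.0, 8.0)):
    """Mean magnitude-squared coherence within a low-frequency band (Hz)."""
    f, cxy = coherence(eeg, motion, fs=fs, nperseg=int(2 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return float(cxy[mask].mean())


def entropy_score(values, bins=16):
    """Shannon entropy (bits) of a histogram of coherence values."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())


rng = np.random.default_rng(0)
fs = 128.0                              # assumed EEG sampling rate (Hz)
n = int(60 * fs)                        # 60 s of synthetic data

# Smoothed noise stands in for an optical-flow motion envelope.
motion = np.convolve(rng.standard_normal(n), np.ones(32) / 32, mode="same")

# "Intact" condition: EEG tracks the motion signal; "reversed" condition:
# EEG tracks the time-reversed motion, destroying coherence with the original.
eeg_intact = motion + 0.1 * rng.standard_normal(n)
eeg_reversed = motion[::-1] + 0.1 * rng.standard_normal(n)

c_intact = band_coherence(eeg_intact, motion, fs)
c_reversed = band_coherence(eeg_reversed, motion, fs)
print(f"low-freq coherence  intact: {c_intact:.3f}  reversed: {c_reversed:.3f}")
```

Under these assumptions, low-frequency coherence is high when the EEG follows the motion signal and near chance for the time-reversed control, mirroring the contrast the abstract describes; `entropy_score` illustrates how a distributional feature could be ranked for selection.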