Cognitively inspired NLP leverages human-derived data to teach machines about language processing mechanisms. Recently, neural networks have been augmented with behavioral data to solve a range of NLP tasks spanning syntax and semantics. We are the first to exploit neuroscientific data, namely electroencephalography (EEG), to inform a neural attention model about language processing in the human brain. The challenge in working with EEG is that its features are exceptionally rich and require extensive pre-processing to isolate the signals specific to text processing. We devise a method for finding such EEG features to supervise machine attention by combining theoretically motivated cropping with random forest tree splits. After this dimensionality reduction, the pre-processed EEG features can distinguish two reading tasks drawn from a publicly available EEG corpus. We apply these features to regularise attention on relation classification and show that EEG supervision is more informative than strong baselines. The improvement depends on both the cognitive load of the task and the EEG frequency domain. Hence, informing neural attention models with EEG signals is beneficial, but further investigation is needed to understand which dimensions are most useful across NLP tasks.
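As a rough illustration of the feature-selection step, the sketch below ranks pre-cropped EEG features by random forest split importance and keeps only the top-ranked ones. The array shapes, the electrode/band layout, and the cut-off `k` are assumptions made for the example, not the paper's exact pipeline; real inputs would come from the EEG corpus.

```python
# Sketch: select EEG features via random forest split importance, then
# verify the reduced features still separate the two reading tasks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder features: one row per word fixation, columns are
# (electrode x frequency-band) power values after theoretically
# motivated cropping. 104 electrodes x 8 bands is illustrative.
X = rng.normal(size=(2000, 104 * 8))
y = rng.integers(0, 2, size=2000)  # 0 = normal reading, 1 = task-specific reading

forest = RandomForestClassifier(n_estimators=300, random_state=0)
forest.fit(X, y)

# Impurity-based importances reflect how often a feature is used for a
# split; the highest-ranked features form the reduced EEG representation.
k = 32
top_idx = np.argsort(forest.feature_importances_)[::-1][:k]
X_reduced = X[:, top_idx]

# On real recordings the reduced features should still distinguish the two
# reading tasks; with this random placeholder the score merely exercises
# the pipeline.
acc = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X_reduced, y, cv=5,
).mean()
print(f"5-fold accuracy on reduced features: {acc:.3f}")
```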
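The regularisation itself can be pictured as an auxiliary loss term that pulls the model's attention distribution toward a distribution derived from the selected EEG features. The PyTorch sketch below is a minimal version under that assumption; the MSE penalty, the softmax normalisation, and the weight `lam` are illustrative choices, not the paper's exact objective.

```python
# Minimal sketch of EEG-supervised attention regularisation, assuming a
# model that exposes a per-token attention distribution.
import torch
import torch.nn.functional as F

def eeg_regularised_loss(logits, labels, attn, eeg_scores, lam=0.1, mask=None):
    """Relation-classification loss plus an EEG attention penalty.

    logits:     (batch, n_classes) classification scores
    labels:     (batch,) gold relation labels
    attn:       (batch, seq_len) model attention weights (rows sum to 1)
    eeg_scores: (batch, seq_len) unnormalised per-word EEG feature values
    mask:       (batch, seq_len) bool mask, True for real tokens
    """
    task_loss = F.cross_entropy(logits, labels)
    # Turn the EEG scores into a distribution over tokens so they are
    # directly comparable to the attention weights.
    if mask is not None:
        eeg_scores = eeg_scores.masked_fill(~mask, float("-inf"))
    eeg_target = torch.softmax(eeg_scores, dim=-1)
    attn_loss = F.mse_loss(attn, eeg_target)
    return task_loss + lam * attn_loss

# Usage with dummy tensors standing in for model outputs and EEG features.
logits = torch.randn(4, 10, requires_grad=True)
attn = torch.softmax(torch.randn(4, 20, requires_grad=True), dim=-1)
labels = torch.randint(0, 10, (4,))
eeg = torch.randn(4, 20)
loss = eeg_regularised_loss(logits, labels, attn, eeg)
loss.backward()  # gradients flow through both the task and the attention term
```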