We investigate whether and where multi-task learning (MTL) can improve performance on NLP problems related to argumentation mining (AM), in particular argument component identification. Our results show that MTL performs particularly well (and better than single-task learning) when little training data is available for the main task, a common scenario in AM. Our findings challenge previous assumptions that conceptualizations across AM datasets are divergent and that MTL is difficult for semantic or higher-level tasks.