Fine-tuning pre-trained language models (PTLMs), such as BERT and its improved variant RoBERTa, has been a common practice for advancing performance on natural language understanding (NLU) tasks. Recent advances in representation learning show that isotropic (i.e., unit-variance and uncorrelated) embeddings can significantly improve performance on downstream tasks, with faster convergence and better generalization. The isotropy of the pre-trained embeddings in PTLMs, however, remains relatively under-explored. In this paper, we analyze the isotropy of the pre-trained [CLS] embeddings of PTLMs with straightforward visualization, and point out two major issues: high variance in their standard deviation, and high correlation between different dimensions. We also propose a new network regularization method, isotropic batch normalization (IsoBN), to address these issues and learn more isotropic representations during fine-tuning by dynamically penalizing dominating principal components. This simple yet effective fine-tuning method yields an absolute improvement of about 1.0 point on average across seven NLU tasks.
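To make the two isotropy criteria above concrete, the following is a minimal PyTorch sketch (not part of the paper's released code; `isotropy_stats` is a hypothetical helper) that measures the per-dimension standard deviation and the mean absolute correlation between dimensions of a batch of [CLS] embeddings. Isotropic embeddings would show near-uniform standard deviations and near-zero off-diagonal correlations.

```python
import torch


def isotropy_stats(cls_embeddings: torch.Tensor):
    """Summarize the isotropy of a batch of [CLS] embeddings.

    Args:
        cls_embeddings: tensor of shape (num_examples, hidden_dim),
            e.g. pooled [CLS] vectors from a PTLM.

    Returns:
        per_dim_std: standard deviation of each embedding dimension.
        mean_abs_corr: mean absolute pairwise correlation between
            dimensions (near 0 for uncorrelated, near 1 for highly
            correlated dimensions).
    """
    # Per-dimension standard deviation; isotropic embeddings would have
    # (near-)uniform values across dimensions.
    per_dim_std = cls_embeddings.std(dim=0)

    # Pearson correlation matrix between dimensions (torch.corrcoef
    # expects variables along rows, hence the transpose).
    corr = torch.corrcoef(cls_embeddings.T)

    # Average the absolute off-diagonal correlations.
    off_diag = corr - torch.diag(torch.diag(corr))
    d = cls_embeddings.size(1)
    mean_abs_corr = off_diag.abs().sum() / (d * (d - 1))

    return per_dim_std, mean_abs_corr


# Example usage: random Gaussian vectors are close to isotropic, so the
# standard deviations are nearly uniform and the mean correlation is small.
x = torch.randn(1024, 768)
std, mac = isotropy_stats(x)
print(std.min().item(), std.max().item(), mac.item())
```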