Sparse regression has recently been applied to enable transfer learning from very limited data. We study an extension of this approach to unsupervised learning -- in particular, learning word embeddings from unstructured text corpora using low-rank matrix factorization. Intuitively, when transferring word embeddings to a new domain, we expect the embeddings to change for only a small number of words -- e.g., the ones with novel meanings in that domain. We propose a novel group-sparse penalty that exploits this sparsity to perform transfer learning when very little text data is available in the target domain -- e.g., a single article of text. We prove generalization bounds for our algorithm. Furthermore, we empirically evaluate its effectiveness in terms of both prediction accuracy on downstream tasks and the interpretability of the results.
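To make the group-sparsity intuition concrete, the following is a minimal sketch (not the paper's implementation) of a group-lasso-style penalty on the per-word change between source and target embeddings, together with its proximal operator. The function names, the NumPy formulation, and the row-wise grouping (one group per word) are illustrative assumptions; the key property shown is that words whose embedding barely changes are snapped exactly back to their source embedding.

```python
import numpy as np

def group_sparse_penalty(W_target, W_source, lam):
    # lam * sum_i ||W_target[i] - W_source[i]||_2
    # An L2,1 (group-lasso) norm on embedding changes, one group per word.
    diff = W_target - W_source
    return lam * np.linalg.norm(diff, axis=1).sum()

def prox_group_sparse(W_target, W_source, step_lam):
    # Proximal step for the penalty above: shrink each word's change
    # toward zero; words whose change has norm <= step_lam are set
    # exactly to their source embedding (group sparsity).
    diff = W_target - W_source
    norms = np.linalg.norm(diff, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - step_lam / np.maximum(norms, 1e-12))
    return W_source + scale * diff
```

In a proximal-gradient scheme, the factorization loss on the target corpus would be minimized by gradient steps, with `prox_group_sparse` applied after each step so that only the few words with genuinely new domain meanings end up with modified embeddings.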