Code-switching is still an understudied phenomenon in natural language processing mainly because of two related challenges: it lacks annotated data, and it combines a vast diversity of low-resource languages. Despite the language diversity, many code-switching scenarios occur in language pairs, and English is often a common factor among them. In the first part of this paper, we use transfer learning from English to English-paired code-switched languages for the language identification (LID) task by applying two simple yet effective techniques: 1) a hierarchical attention mechanism that enhances morphological clues from character n-grams, and 2) a secondary loss that forces the model to learn n-gram representations that are particular to the languages involved. We use the bottom layers of the ELMo architecture to learn these morphological clues by essentially recognizing what is and what is not English. Our approach outperforms the previous state of the art on Nepali-English, Spanish-English, and Hindi-English datasets. In the second part of the paper, we use our best LID models for the tasks of Spanish-English named entity recognition and Hindi-English part-of-speech tagging by replacing their inference layers and retraining them. We show that our retrained models are capable of using the code-switching information on both tasks to outperform models that do not have such knowledge.
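The two techniques above can be illustrated with a minimal numpy sketch: attention over character n-gram embeddings produces a word representation, and a secondary cross-entropy loss pushes each n-gram embedding toward the language it belongs to. Everything here is an assumption for illustration (the hashing trick, the context vector `attn_vec`, the toy dimensions), not the paper's actual ELMo-based implementation.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 8      # toy embedding size (assumption)
VOCAB = 1000     # hashed n-gram vocabulary size (assumption)
emb_table = rng.normal(size=(VOCAB, EMB_DIM))   # n-gram embeddings
attn_vec = rng.normal(size=EMB_DIM)             # learned attention context vector

def char_ngrams(word, n=3):
    """Character n-grams with boundary markers, e.g. '<co', 'cod', ..."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def embed(grams):
    """Look up each n-gram via a deterministic hash (illustrative only)."""
    idx = [zlib.crc32(g.encode()) % VOCAB for g in grams]
    return emb_table[idx]                        # shape (G, EMB_DIM)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def word_representation(word):
    """Attention-weighted sum of n-gram embeddings -> word vector."""
    E = embed(char_ngrams(word))
    alpha = softmax(E @ attn_vec)                # one weight per n-gram
    return alpha @ E, alpha

def secondary_loss(word, lang_id, W_lang):
    """Mean cross-entropy classifying every n-gram as its word's language,
    forcing n-gram representations to be language-specific."""
    E = embed(char_ngrams(word))
    probs = softmax(E @ W_lang)                  # (G, n_languages)
    return -np.log(probs[:, lang_id] + 1e-12).mean()

vec, alpha = word_representation("speaking")
W_lang = rng.normal(size=(EMB_DIM, 2))           # toy 2-language head: 0=eng, 1=other
loss = secondary_loss("speaking", 0, W_lang)
```

In the full model the attention weights and language head would be trained jointly with the main LID objective; here they are random, so the sketch only shows the shapes and the flow of the two mechanisms.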