This report details our submission to OffensEval 2019 (SemEval 2019, Task 6), a shared task based on the Offensive Language Identification Dataset. We first describe the classifiers implemented, the input data used, and the pre-processing performed. We then critically evaluate our performance: we achieved macro-averaged F1-scores of 0.76, 0.68, and 0.54 on Sub-tasks A, B, and C respectively, which we believe reflects the sophistication of the models implemented. Finally, we discuss the difficulties encountered and possible improvements for future work.
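Since the reported scores are macro-averaged F1, a minimal sketch of that metric may clarify how it is computed: per-class F1 scores are averaged with equal weight, regardless of class frequency. The labels and predictions below are purely illustrative (OFF/NOT follows the Sub-task A label scheme), not data from the actual submission:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean."""
    labels = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Hypothetical gold labels and predictions for illustration only.
y_true = ["OFF", "NOT", "NOT", "OFF", "NOT"]
y_pred = ["OFF", "NOT", "OFF", "OFF", "NOT"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.8
```

Because each class contributes equally, macro-averaging penalises poor performance on rare classes, which matters on the heavily imbalanced OLID label distribution.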