Evaluation metrics play a vital role in the growth of a research area, as they define the standard for distinguishing good models from bad ones. In code synthesis, the commonly used evaluation metrics are BLEU and perfect accuracy, but neither is well suited to evaluating code: BLEU was originally designed to evaluate natural language and neglects the important syntactic and semantic features of code, while perfect accuracy is too strict and underestimates different outputs that share the same semantic logic. To remedy this, we introduce a new automatic evaluation metric, dubbed CodeBLEU. It retains the strength of BLEU's n-gram matching and further injects code syntax via abstract syntax trees (AST) and code semantics via data flow. We conduct experiments evaluating the correlation coefficient between CodeBLEU and quality scores assigned by programmers on three code synthesis tasks, i.e., text-to-code generation, code translation, and code refinement. Experimental results show that CodeBLEU achieves a better correlation with programmer-assigned scores than BLEU and accuracy.
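To make the composition concrete, below is a minimal Python sketch of the CodeBLEU idea as described here: a weighted combination of a BLEU-style n-gram match, an AST subtree match, and a data-flow match. The helper names, the string/tuple representations of subtrees and data-flow edges, and the equal weights are illustrative assumptions, not the paper's exact formulation (the full metric also includes a keyword-weighted n-gram term).

```python
from collections import Counter


def ngram_match(ref_tokens, cand_tokens, max_n=4):
    """BLEU-style modified n-gram precision, averaged over 1..max_n grams."""
    precisions = []
    for n in range(1, max_n + 1):
        ref_grams = Counter(tuple(ref_tokens[i:i + n])
                            for i in range(len(ref_tokens) - n + 1))
        cand_grams = Counter(tuple(cand_tokens[i:i + n])
                             for i in range(len(cand_tokens) - n + 1))
        overlap = sum((cand_grams & ref_grams).values())  # clipped counts
        precisions.append(overlap / max(sum(cand_grams.values()), 1))
    return sum(precisions) / max_n


def match_fraction(ref_items, cand_items):
    """Fraction of candidate items (AST subtrees or data-flow edges)
    that also appear in the reference, with clipped counts."""
    ref, cand = Counter(ref_items), Counter(cand_items)
    return sum((cand & ref).values()) / max(sum(cand.values()), 1)


def codebleu(ref_tokens, cand_tokens, ref_subtrees, cand_subtrees,
             ref_dataflow, cand_dataflow, weights=(1/3, 1/3, 1/3)):
    """Weighted combination of the three components named in the abstract:
    token n-grams, AST subtrees, and data-flow edges."""
    w_ngram, w_ast, w_df = weights
    return (w_ngram * ngram_match(ref_tokens, cand_tokens)
            + w_ast * match_fraction(ref_subtrees, cand_subtrees)
            + w_df * match_fraction(ref_dataflow, cand_dataflow))
```

A full implementation would extract the subtrees with a real parser and normalize variable names in the data-flow graph so that semantically identical code with different identifiers still matches; this sketch elides those details.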