The rapid development of large language models (LLMs) has revolutionized software testing, particularly fuzz testing, by automating the generation of diverse and effective test inputs. This advancement holds great promise for improving software reliability. Meanwhile, the introduction of MOJO, a high-performance AI programming language that blends Python's usability with the efficiency of C and C++, presents new opportunities to enhance AI model scalability and programmability. However, as a new language, MOJO lacks comprehensive testing frameworks and a sufficient corpus for LLM-based testing, which exacerbates model hallucination: LLMs generate syntactically valid but semantically incorrect code, significantly reducing the effectiveness of fuzz testing. To address this challenge, we propose MOJOFuzzer, the first adaptive LLM-based fuzzing framework designed for the zero-shot setting of emerging programming languages. MOJOFuzzer integrates a multi-phase framework that systematically eliminates low-quality generated inputs before execution, significantly improving test case validity. Furthermore, MOJOFuzzer dynamically adapts LLM prompts based on runtime feedback for test case mutation, enabling an iterative learning process that continuously improves fuzzing efficiency and bug detection. Our experimental results demonstrate that MOJOFuzzer significantly improves test validity, API coverage, and bug detection performance, outperforming both traditional fuzz testing and state-of-the-art LLM-based fuzzing approaches. Using MOJOFuzzer, we conducted the first large-scale fuzz testing evaluation of MOJO, uncovering 13 previously unknown bugs. This study not only advances the field of LLM-driven software testing but also establishes a foundational methodology for leveraging LLMs to test emerging programming languages.
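The abstract describes two mechanisms: pre-execution filtering of low-quality generated inputs, and prompt adaptation driven by runtime feedback. A minimal sketch of such a loop is shown below; all function names and the simulated feedback are hypothetical placeholders, not the authors' implementation or the Mojo toolchain's API.

```python
import random

def llm_generate(prompt):
    # Hypothetical stub standing in for a real LLM call that returns
    # a candidate Mojo test case for the given prompt.
    return f"fn main(): pass  # derived from: {prompt[:30]}"

def is_valid(candidate):
    # Pre-execution filtering phase: discard obviously low-quality
    # inputs (e.g. empty or truncated code) before running them.
    return bool(candidate.strip()) and "fn" in candidate

def execute(candidate):
    # Placeholder for running the candidate under a Mojo toolchain;
    # here runtime feedback is simulated with a random outcome.
    return random.choice(["ok", "crash", "semantic_error"])

def adapt_prompt(prompt, feedback):
    # Feedback phase: fold the runtime outcome back into the next
    # prompt so generation steers away from the same failure mode.
    return prompt + f" | avoid: {feedback}"

def fuzz_loop(seed_prompt, iterations=5):
    prompt, bugs = seed_prompt, []
    for _ in range(iterations):
        candidate = llm_generate(prompt)
        if not is_valid(candidate):
            continue  # filtered out before execution
        feedback = execute(candidate)
        if feedback == "crash":
            bugs.append(candidate)  # record a bug-triggering input
        prompt = adapt_prompt(prompt, feedback)
    return bugs
```

The sketch only illustrates the control flow implied by the abstract: generation, validity filtering, execution, and feedback-driven prompt mutation run as one iterative loop.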