Large language models (LLMs) have demonstrated remarkable capabilities in code generation. However, their effectiveness relies heavily on supervised training with extensive labeled data (e.g., question-answer pairs) or unlabeled datasets (e.g., code snippets), which are often expensive and difficult to obtain at scale. To address this limitation, this paper introduces IPC, an unsupervised framework that leverages Internal Probing of LLMs for Code generation without any external corpus, not even unlabeled code snippets. We introduce problem-space probing, test-understanding probing, solution-space probing, and knowledge consolidation and reinforcement to elicit the internal knowledge and confidence patterns already present in LLMs. IPC then identifies reliable code candidates through self-consistency mechanisms and representation-based quality estimation, and uses them to train UCoder (a coder trained with unsupervised learning). We validate the proposed approach on multiple code benchmarks, demonstrating that unsupervised methods can achieve performance competitive with supervised approaches while significantly reducing dependence on labeled data and computational resources. Analytic experiments reveal that internal model states carry rich signals about code quality and correctness, and that properly harnessing these signals enables effective unsupervised learning for code generation, opening new directions for training code LLMs in resource-constrained scenarios.
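To make the self-consistency idea concrete, the sketch below shows one common way such a mechanism can work: sample several candidate programs for the same problem, execute each on a shared set of test inputs, and keep only the candidates in the largest behavioral-agreement cluster. This is a minimal illustration, not the paper's implementation; all function names and the use of execution-output signatures are assumptions.

```python
# Minimal sketch of self-consistency filtering over sampled code candidates.
# Candidates whose outputs agree on every test input form one cluster; the
# largest cluster is treated as the most reliable.
from collections import Counter
from typing import Any, Callable, List


def run_candidate(program: Callable, inputs: List[Any]) -> tuple:
    """Execute one candidate on every test input; errors become a sentinel."""
    outputs = []
    for x in inputs:
        try:
            outputs.append(repr(program(x)))
        except Exception:
            outputs.append("<error>")
    return tuple(outputs)


def select_by_self_consistency(candidates: List[Callable],
                               test_inputs: List[Any]) -> List[Callable]:
    """Keep the candidates belonging to the largest agreement cluster."""
    signatures = [run_candidate(c, test_inputs) for c in candidates]
    majority_sig, _ = Counter(signatures).most_common(1)[0]
    return [c for c, s in zip(candidates, signatures) if s == majority_sig]


# Usage: three sampled "candidates" for an add-one task; the two that agree
# on all inputs form the majority cluster and are retained.
cands = [lambda x: x + 1, lambda x: x + 1, lambda x: x + 2]
kept = select_by_self_consistency(cands, test_inputs=[0, 1, 5])
print(len(kept))  # -> 2
```

In practice the retained cluster would supply training examples for the student model, with the test inputs themselves generated by the LLM rather than taken from an external corpus.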
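For the representation-based quality estimation component, one plausible reading is a lightweight probe fitted on hidden-state features of each candidate, using self-consistency agreement as pseudo-labels. The abstract does not specify this pairing; the sketch below is a hypothetical illustration with random stand-in features in place of pooled LLM hidden states.

```python
# Hypothetical sketch: a linear probe over candidate representations, trained
# with pseudo-labels (1 = candidate sat in the majority agreement cluster).
# Features are random stand-ins for pooled LLM hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: 200 candidates, 16-dim pooled hidden-state features.
features = rng.normal(size=(200, 16))
pseudo_labels = (features[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

probe = LogisticRegression(max_iter=1000).fit(features, pseudo_labels)

# Rank fresh candidates by estimated quality and keep the top-k as
# fine-tuning data for a student model such as UCoder.
new_feats = rng.normal(size=(10, 16))
scores = probe.predict_proba(new_feats)[:, 1]
top_k = np.argsort(scores)[::-1][:3]
print(top_k, scores[top_k])
```

A probe of this kind is cheap to fit and never touches external labels, which is consistent with the paper's claim that internal model states already carry usable signals about code quality.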