CODET: CODE GENERATION WITH GENERATED TESTS

Abstract

The task of generating code solutions for a given programming problem can benefit from pre-trained language models such as Codex, which can produce multiple diverse samples. However, a major challenge for this task is to select the most appropriate solution from the multiple samples generated by the pre-trained language models. A natural way to evaluate the quality and correctness of a code solution is to run it against a set of test cases, but manually creating such test cases is often costly and time-consuming. In this paper, we propose a novel method, CODET, that leverages the same pre-trained language models to automatically generate test cases for the code samples, thus reducing the human effort and increasing the coverage of the test scenarios. CODET then executes the code samples using the generated test cases and performs a dual execution agreement, which considers both the consistency of the outputs against the generated test cases and the agreement of the outputs with other code samples. We conduct comprehensive experiments on four benchmarks, HumanEval, MBPP, APPS, and CodeContests, using five different pre-trained language models with varying sizes and capabilities. Our results show that CODET can significantly improve the performance of code solution selection over previous methods, achieving remarkable and consistent gains across different models and benchmarks. For instance, CODET improves the pass@1 metric on HumanEval to 65.8%, an absolute improvement of 18.8% over the code-davinci-002 model and of more than 20% over the previous state-of-the-art results.

* The first three authors contributed equally.
¹ We report the results on the HumanEval benchmark with the Codex model code-cushman-001. More results with different models and benchmarks can be found in Sections 4.1 and 4.2.
² https://github.com/features/copilot

1. INTRODUCTION

Despite the remarkable progress in pre-training techniques for code generation, selecting a single correct solution from the multiple candidates generated by large language models remains a hard problem. For instance, Codex (Chen et al., 2021), a state-of-the-art pre-trained language model for code generation, achieves a pass@100 of 77.4% (a problem counts as solved if at least one of 100 generated solutions passes the corresponding test cases) but a pass@1 (the correct rate of a single solution) of only 33.5% on the HumanEval benchmark (Chen et al., 2021)¹. This huge gap limits the practical usefulness of code generation models and motivates us to explore how to pick the correct or best solution from multiple candidates.

A straightforward way to verify the correctness of a solution is to execute it and check whether it passes all corresponding test cases. This execution-guided approach has been widely adopted in various code-related tasks, such as code generation (Chen et al., 2021; Li et al., 2022b; Shi et al., 2022), code translation (Roziere et al., 2021), and program synthesis (Chen et al., 2018; Ellis et al., 2019). However, it relies heavily on the quality and quantity of test cases, which are often costly and time-consuming to create and maintain. Moreover, in real-world applications like Copilot², a code generation tool that assists developers in writing code, it is unrealistic to expect users to provide test cases for every problem they want to solve. Therefore, we propose to automatically generate test cases for arbitrary programming problems and use them to quickly verify any solution.
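The pass@k numbers above can be computed with the unbiased estimator introduced by Chen et al. (2021): draw n samples per problem, count the c correct ones, and estimate the probability that at least one of k randomly chosen samples is correct. A minimal sketch (function and variable names are ours):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability
    that at least one of k samples, drawn without replacement from n
    total samples of which c are correct, passes the test cases."""
    if n - c < k:
        # Every possible draw of k samples contains a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n = 4 samples of which c = 2 are correct, `pass_at_k(4, 2, 1)` evaluates to 0.5, matching the intuition that a single random sample is correct half the time.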


Figure 1: The illustration of CODET. Both the code solutions and the test cases are generated by the pre-trained language model. The best code solution is then selected by a dual execution agreement.

In this paper, we propose CODET: CODE generation with generated Test-driven dual execution agreement, as illustrated in Figure 1. First, we leverage the same pre-trained language model that generates code solutions, such as Codex, to generate a large number of test cases for each programming problem by providing an elaborate instruction as the prompt. Next, we use a dual execution agreement approach inspired by the classical RANSAC algorithm (Fischler & Bolles, 1981). We execute each generated code solution on each generated test case, and iteratively find multiple groups of code solution and test case pairs. Each group, or consensus set, contains solutions that pass the same test cases, indicating that they share the same functionality even if their implementations differ. We expect that a solution passing more test cases is more likely to be correct, and that a solution with more similar solutions, i.e., more solutions in the same consensus set, is more consistent with the problem specification. We therefore rank each consensus set by both the number of test cases and the number of solutions in it, and choose the best solution from the highest-ranked consensus set. Our method is simple and efficient: it requires neither labelled data nor additional rankers, yet achieves surprisingly exceptional performance.

We evaluate our method on five different pre-trained language models for code generation: three OpenAI Codex models (Chen et al., 2021), INCODER (Fried et al., 2022), and CODEGEN (Nijkamp et al., 2022), as well as four established benchmarks for code generation: HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), APPS (Hendrycks et al., 2021), and CodeContests (Li et al., 2022b).
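The grouping and ranking steps above can be sketched as follows. This is a simplified, hypothetical illustration rather than the paper's implementation: solutions are Python callables, test cases are (input, expected output) pairs, and we score each consensus set by the product of its solution count and test count, one plausible reading of ranking "by both the number of test cases and solutions":

```python
from collections import defaultdict

def passes(solution, test_input, expected):
    """Run one solution on one test case; any exception counts as a failure."""
    try:
        return solution(test_input) == expected
    except Exception:
        return False

def dual_execution_agreement(solutions, test_cases):
    """Group solutions that pass exactly the same test cases into consensus
    sets, score each set, and return a solution from the best set."""
    groups = defaultdict(list)
    for sol in solutions:
        passed = frozenset(
            i for i, (inp, expected) in enumerate(test_cases)
            if passes(sol, inp, expected)
        )
        groups[passed].append(sol)
    # Score each consensus set by (#agreeing solutions) x (#passed tests).
    best_tests, best_sols = max(
        groups.items(), key=lambda kv: len(kv[1]) * len(kv[0])
    )
    return best_sols[0]
```

Under this scoring, two functionally equivalent solutions that pass both generated tests outrank a lone solution that passes none, even though all three were sampled from the same model.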
The experimental results show that our method can effectively select the correct solution from multiple candidates, improving the pass@1 score significantly on all benchmarks in the zero-shot setting. For instance, CODET achieves improvements using code-davinci-002: HumanEval (47.0% → 65.8%), MBPP (58.1% → 67.7%), APPS INTRODUCTORY (27.2% → 34.6%), and CodeContests (0.7% → 2.1%). Moreover, when we combine code-davinci-002, the most powerful pre-trained model, and CODET, we outperform previous state-of-the-art methods by a large margin, e.g., HumanEval: 42.7% (Inala et al., 2022) → 65.8%. We also conduct a thorough analysis to provide more insights. Our work is publicly available at https://github.com/microsoft/CodeT.

2. METHODOLOGY

The task of code generation is to solve a programming problem: generate a code solution x based on a context c. As shown in Figure 2, the context c contains a natural language problem description in the form of a code comment, and a code snippet that includes statements such as imports and the function header. A code solution is a code snippet that solves the programming problem described in the context. Generally, we sample a set of code solutions, denoted as X = {x_1, x_2, ..., x_N}, based on the context c using a pre-trained language model M, which can be formulated as X = M(c). Our goal is to select the best code solution x from the set of generated code solutions X, where x is the solution most likely to correctly solve the given programming problem. To this end, we propose CODET in the hope of unleashing the inherent power of the pre-trained language model M. Specifically, we use M to generate test cases for the programming problem (Section 2.1), and then select the best code solution x based on a dual execution agreement (Section 2.2).
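As a concrete illustration of a context c and a sampled solution x, consider a HumanEval-style problem (the example below is our own illustrative sketch; the function name, docstring, and body are assumptions, not taken from the paper's figures):

```python
# Context c: a natural-language problem description (as a docstring)
# plus the function header. The model is asked to complete the body.
def has_close_elements(numbers, threshold):
    """Check if any two numbers in the given list are closer
    to each other than the given threshold."""
    # One plausible code solution x sampled from the model:
    for i, a in enumerate(numbers):
        for b in numbers[i + 1:]:
            if abs(a - b) < threshold:
                return True
    return False
```

Here `has_close_elements([1.0, 2.8, 3.0], 0.3)` returns True, since 2.8 and 3.0 differ by only 0.2. In practice the model would return many such candidate bodies, forming the set X.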

2.1. TEST CASE GENERATION

Besides generating code solutions, we also need to generate test cases to evaluate the correctness of the code solutions. A test case is a pair of input and expected output for the function defined in the



