---
license: apache-2.0
---

# Codev-Bench

## Introduction
Codev-Bench (Code Development Benchmark) is a fine-grained, real-world, repository-level, and developer-centric evaluation framework. Codev-Bench assesses whether a code completion tool can accurately capture a developer's immediate intent and suggest appropriate code snippets across diverse, fine-grained contexts.

In daily IDE-based development, a user's real-time autocompletion needs are diverse. They include not only generating a whole function from a comment, but also sub-scenes such as contextual completion of logical blocks, completion of function parameter lists, and completion of ordinary statements. Previous code generation or completion benchmarks focus only on generating an entire function from a comment, for example [HumanEval](https://github.com/openai/human-eval), [MBPP](https://huggingface.co/datasets/google-research-datasets/mbpp), [ClassEval](https://github.com/FudanSELab/ClassEval), [LiveCodeBench](https://github.com/LiveCodeBench/LiveCodeBench), [EvoCodeBench](https://github.com/seketeam/EvoCodeBench), etc.

To better align with real development scenarios, we propose Codev-Bench. It not only reproduces the diverse sub-scenes that users may encounter during development but also constructs a unit-test-based evaluation method to more accurately assess the quality of code generated by various LLMs.

## Methodology
In detail, we first extract unit test classes and functions from real GitHub repositories, then complete the installation of environment dependencies and execute the unit tests with the assistance of GPT-4. At the same time, we use [pytest](https://docs.pytest.org/en/stable/) tracing to extract the execution traces of the unit tests and identify the target functions covered by each unit test. Finally, [tree-sitter](https://tree-sitter.github.io/tree-sitter/) is used to parse the AST (Abstract Syntax Tree) of the target functions, so that all sub-functions, comments, logical blocks, statements, etc. can be recognized.
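
To make the block-extraction step concrete, below is a minimal sketch of how sub-blocks of a target function can be enumerated with their line spans. It uses Python's built-in `ast` module as a stand-in for tree-sitter (note that `ast` drops comments, which tree-sitter preserves), and the helper name `extract_blocks` is purely illustrative, not part of the released tooling.

```python
import ast


def extract_blocks(source: str):
    """List candidate completion blocks (functions, branches, loops, plain
    statements) with their line spans, mimicking the tree-sitter pass."""
    blocks = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                             ast.If, ast.For, ast.While,
                             ast.Return, ast.Assign, ast.Expr)):
            blocks.append({
                "type": type(node).__name__,
                "start_line": node.lineno,
                "end_line": node.end_lineno,
            })
    return blocks


if __name__ == "__main__":
    code = (
        "def add(a, b):\n"
        "    if a is None:\n"
        "        return b\n"
        "    return a + b\n"
    )
    for block in extract_blocks(code):
        print(block)
```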

We split the completion sub-scenes, i.e. the capabilities a user may exercise while developing in an IDE, into the following parts:

> **Scene1**. ✅ Completing entire code blocks (including functions, conditional logic blocks, loop logic blocks, comments, ordinary statements, etc.).

- **Scene1.1**. ✅ The context around the code block to be completed is fully complete.

- **Scene1.2**. ✅ The rest of the enclosing function body is empty, but the context outside the function is complete.

- **Scene1.3**. ✅ The context following the code block to be completed is completely empty.

> **Scene2**. ✅ Completing a portion of code within a specific code block.

- **Scene2.1**. ✅ Complete a portion of code within the code block.

- **Scene2.2**. ✅ The code block is already complete and no code should be added.

> **Scene3**. 🔄 Completing code based on classes and functions defined in other files.

> **Scene4**. 🔄 Completing code based on related and similar code within the project.

## How To Use

### Data

Researchers and developers can download the source GitHub repositories [Source_Code.tar.gz](https://huggingface.co/datasets/TongyiLingma/CodevBench/resolve/main/Source_Code.tar.gz?download=true) and their copy version [Source_Code_Copy.tar.gz](https://huggingface.co/datasets/TongyiLingma/CodevBench/resolve/main/Source_Code_Copy.tar.gz?download=true). These repositories are taken from [EvoCodeBench](https://github.com/seketeam/EvoCodeBench) and were created between Dec 2023 and Feb 2024. In the future, we will continuously crawl and analyze new repositories to serve as source repositories for evaluation.

All files can be downloaded as follows:
```
cd CodevBench

# download the code of the source repositories
wget "https://huggingface.co/datasets/TongyiLingma/CodevBench/resolve/main/Source_Code.tar.gz?download=true" -O Source_Code.tar.gz
tar -zxvf Source_Code.tar.gz

# download the copy version of the source repositories
wget "https://huggingface.co/datasets/TongyiLingma/CodevBench/resolve/main/Source_Code_Copy.tar.gz?download=true" -O Source_Code_Copy.tar.gz
tar -zxvf Source_Code_Copy.tar.gz

# download the repositories' metadata (e.g. unit test paths, functions, target blocks, etc.)
wget "https://huggingface.co/datasets/TongyiLingma/CodevBench/resolve/main/metadatas.tar.gz?download=true" -O metadatas.tar.gz
tar -zxvf metadatas.tar.gz

# download the prompt of each completion question
wget "https://huggingface.co/datasets/TongyiLingma/CodevBench/resolve/main/prompts.tar.gz?download=true" -O prompts.tar.gz
tar -zxvf prompts.tar.gz

# download the predicted responses of the evaluated LLMs and code LLMs
wget "https://huggingface.co/datasets/TongyiLingma/CodevBench/resolve/main/predicts.tar.gz?download=true" -O predicts.tar.gz
tar -zxvf predicts.tar.gz
```

### Installation

We recommend that researchers and developers use a virtual environment (e.g. conda or venv) for the benchmark itself.

```
cd CodevBench
python3.10 -m venv myenv && source myenv/bin/activate
pip install pytest pandas tqdm fuzzywuzzy
```

Then, researchers and developers can build the execution environments of the repositories by running the following command.

```
bash create_env.sh
```

Building the execution environments takes a few hours.

### Validation

To validate whether the unit tests of each repository execute successfully, run the following command:
```
myenv/bin/python src/prepare.py --method retest_block_unit_test --mode prefix_suffix_full_complete_current_block_no_evidence
```
If almost all the unit tests run successfully, researchers and developers can proceed to the subsequent steps of calling models for predictions and running evaluations.

### Prompts

We split the completion sub-scenes or capabilities as follows:

**Scene1.1**: `./prompts/prefix_suffix_full_complete_current_block_no_evidence.jsonl` and `./prompts/prefix_suffix_full_complete_current_block_with_evidence.jsonl`

**Scene1.2**: `./prompts/prefix_full_suffix_func_empty_complete_current_block_no_evidence.jsonl` and `./prompts/prefix_full_suffix_func_empty_complete_current_block_with_evidence.jsonl`

**Scene1.3**: `./prompts/prefix_full_suffix_empty_complete_current_block_no_evidence.jsonl` and `./prompts/prefix_full_suffix_empty_complete_current_block_with_evidence.jsonl`

**Scene2.1**: `./prompts/complete_current_header_inner_block_completion.jsonl`

**Scene2.2**: `./prompts/complete_current_header_empty_completion.jsonl`

**Scene3**: Coming soon.

**Scene4**: Coming soon.

The structure of each prompt is as follows:
```
{
    "func_name": "function file path and line position",
    "item_dids": [
        "unit test ids"
    ],
    "unit_test_ids": [
        "unit test ids"
    ],
    "block_key": "target code block file path and line position",
    "block_type": "AST type of the block",
    "prompt": "<filename>xxx<fim_prefix>xxx<fim_suffix>xxx<fim_middle>xxx",
    "prefix": "prefix context of the target code block",
    "suffix": "suffix context of the target code block",
    "middle": "ground truth of the target code block",
    "test_prefix": "prefix context used to construct the unit test",
    "test_suffix": "suffix context used to construct the unit test",
    "test_middle": "ground truth of the target code block used to construct the unit test"
}
```
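
For example, one way to load and inspect a prompts file is the short sketch below. It assumes `prompts.tar.gz` has been extracted into `./prompts/`; the variable names are ours.

```python
import json

path = "./prompts/prefix_suffix_full_complete_current_block_no_evidence.jsonl"

# Each line of the .jsonl file is one completion question with the fields shown above.
with open(path, "r", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f if line.strip()]

print(len(samples), "completion questions")
first = samples[0]
print(first["block_type"], first["block_key"])
print(first["prefix"][-200:])   # tail of the prefix context around the cursor
print(first["middle"])          # ground-truth block to be completed
```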

### Predictions

We provide the prefix context and suffix context in each prompt, so users can call different models (general LLMs or code LLMs) to predict the completion of the target code block.

For general LLMs, we provide a natural-language prompt template in `./src/templates/llm_template.py`; users can use this template to construct the final prompt and call the model.

For code LLMs, users should construct the prompt according to the fill-in-the-middle (FIM) template of the corresponding code LLM and then call the model. We also provide some calling examples in `src/request_model.py`.
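
As a rough illustration, a fill-in-the-middle prompt can be assembled from the `prefix` and `suffix` fields along the lines of the sketch below. The sentinel tokens differ from model to model; the ones shown simply mirror the documented `prompt` field, and the helper name is ours.

```python
def build_fim_prompt(sample: dict, filename: str = "example.py") -> str:
    """Assemble a fill-in-the-middle prompt from one Codev-Bench sample.
    Sentinel tokens vary by code LLM; these mirror the `prompt` field above."""
    return (
        f"<filename>{filename}"
        f"<fim_prefix>{sample['prefix']}"
        f"<fim_suffix>{sample['suffix']}"
        f"<fim_middle>"
    )

# The model is expected to generate the missing middle, which is later compared
# against sample["middle"] by running the associated unit tests.
```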

The predicted responses have the following structure:
```
{
    "func_name": "function file path and line position",
    "item_dids": [
        "unit test ids"
    ],
    "unit_test_ids": [
        "unit test ids"
    ],
    "block_key": "target code block file path and line position",
    "block_type": "AST type of the block",
    "prompt": "<filename>xxx<fim_prefix>xxx<fim_suffix>xxx<fim_middle>xxx",
    "prefix": "prefix context of the target code block",
    "suffix": "suffix context of the target code block",
    "middle": "ground truth of the target code block",
    "test_prefix": "prefix context used to construct the unit test",
    "test_suffix": "suffix context used to construct the unit test",
    "test_middle": "ground truth of the target code block used to construct the unit test",
    "response_original_text": "original response of the model",
    "response": "the parsed final target code extracted from the model's response"
}
```

We provide some examples in `./predicts/prefix_suffix_full_complete_current_block_no_evidence/predictions/`.

### Evaluation

The final step is to fill the predicted code into the cursor position and run the corresponding unit tests, as sketched below.
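
Conceptually, this amounts to splicing the model's `response` between the prefix and suffix, writing the completed file back, and re-running the associated unit test. The helper below is only an illustration of that idea, not the exact logic of `src/evaluate.py`.

```python
import subprocess


def run_filled_unit_test(sample: dict, response: str,
                         source_path: str, test_path: str) -> bool:
    """Splice the prediction into the cursor position and run its unit test."""
    completed = sample["prefix"] + response + sample["suffix"]
    with open(source_path, "w", encoding="utf-8") as f:
        f.write(completed)
    # A prediction passes if the corresponding unit test still succeeds.
    result = subprocess.run(["pytest", test_path, "-q"], capture_output=True)
    return result.returncode == 0
```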

After calling the model and obtaining the predicted responses, run the following command to execute the unit tests:
```
myenv/bin/python src/evaluate.py --method evaluate_prediction --model codegemma_7b --mode prefix_suffix_full_complete_current_block_no_evidence --check-unittest
```

This generates the result file `./predicts/prefix_suffix_full_complete_current_block_no_evidence/results/codegemma_7b.jsonl.x`. Then, users can use the following command to summarize the results:
```
myenv/bin/python src/evaluate.py --method print_scores --model codegemma_7b
```

## Experimental Results

### Scene1.1

We evaluate several popular general LLMs and code LLMs on the **Scene1.1** subset of Codev-Bench. The results are as follows:

![results of Scene1.1](images/prefix_suffix_full_complete_current_block_no_evidence.png)