Update README.md
README.md (CHANGED)
@@ -29,9 +29,20 @@ configs:
 path: data/train-*
 ---
 
+# Evaluation summary
+
+We introduce HumanEval for Kotlin, created from scratch by human experts.
+All HumanEval solutions and tests are written by an expert olympiad programmer with 6 years of experience in Kotlin and independently checked by a programmer with 4 years of experience in Kotlin.
+The tests we implement are equivalent to the original HumanEval tests for Python, and we fix the prompt signatures to address the generic variable signatures described above.
 
 # How to use
 
+The evaluation is presented as a dataset prepared in a format suitable for MXEval, so it can be easily integrated into the MXEval pipeline.
+
+During the code generation step, we use early stopping on the `}\n}` sequence to expedite the process. We also perform some code post-processing before evaluation; specifically, we remove all comments and signatures.
+
+The early stopping method, post-processing steps, and evaluation code are available in the example below.
+
 ```python
 import torch
 import jsonlines

@@ -137,5 +148,9 @@ evaluate_functional_correctness(
     problem_file=problem_dict,
 )
 
-
 ```
+
+
+# Results
+
+We evaluated multiple coding models using this benchmark, and the results are presented in the table below.
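
For concreteness, here is a minimal sketch of the early-stopping setup described in the `# How to use` text above, assuming a HuggingFace `transformers` causal LM; the model id, prompt, and generation settings are placeholders rather than the authors' exact configuration.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)


class StopOnSequence(StoppingCriteria):
    """Stop generation once the decoded completion contains a stop string."""

    def __init__(self, tokenizer, stop_sequence: str, prompt_length: int):
        self.tokenizer = tokenizer
        self.stop_sequence = stop_sequence
        self.prompt_length = prompt_length  # skip the prompt when checking

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # Decode only the generated continuation and look for the stop sequence.
        completion = self.tokenizer.decode(
            input_ids[0, self.prompt_length:], skip_special_tokens=True
        )
        return self.stop_sequence in completion


tokenizer = AutoTokenizer.from_pretrained("your-model")  # placeholder model id
model = AutoModelForCausalLM.from_pretrained("your-model")

# Illustrative Kotlin HumanEval-style prompt (a function signature to complete).
prompt = "fun hasCloseElements(numbers: List<Double>, threshold: Double): Boolean {\n"
inputs = tokenizer(prompt, return_tensors="pt")

# Early stopping on the "}\n}" sequence that closes the generated function.
stopping = StoppingCriteriaList(
    [StopOnSequence(tokenizer, "}\n}", inputs.input_ids.shape[1])]
)
output = model.generate(**inputs, max_new_tokens=300, stopping_criteria=stopping)
completion = tokenizer.decode(
    output[0, inputs.input_ids.shape[1]:], skip_special_tokens=True
)
```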
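A possible shape for the post-processing step: the README only says that comments and signatures are removed before evaluation, so the regexes below are illustrative heuristics, not the authors' actual rules.

```python
import re


def postprocess(completion: str) -> str:
    # Strip /* ... */ block comments, then // line comments (heuristic:
    # this does not protect string literals or URLs).
    code = re.sub(r"/\*.*?\*/", "", completion, flags=re.DOTALL)
    code = re.sub(r"//[^\n]*", "", code)
    # Drop a leading function signature if the model echoed the prompt,
    # keeping only the body (an approximation, not the authors' exact rule).
    code = re.sub(r"^\s*fun\s+\w+\([^)]*\)[^{]*\{", "", code, count=1)
    # Truncate at the stop sequence in case generation ran past it.
    end = code.find("}\n}")
    if end != -1:
        code = code[: end + len("}\n}")]
    return code
```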
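And a sketch of the evaluation call that ties the pieces together. The function name `evaluate_functional_correctness` and its `problem_file=problem_dict` argument appear in the diff above; the dataset id, the `generate` helper, and the remaining keyword arguments are assumptions.

```python
import jsonlines
from datasets import load_dataset
from mxeval.evaluation import evaluate_functional_correctness

# Load the benchmark problems (the dataset id is a placeholder).
problems = load_dataset("your-org/kotlin-humaneval")["train"]
problem_dict = {problem["task_id"]: problem for problem in problems}

# Write one post-processed completion per task in the JSONL format
# MXEval expects ({"task_id": ..., "completion": ...}).
with jsonlines.open("answers.jsonl", mode="w") as writer:
    for task_id, problem in problem_dict.items():
        completion = generate(problem["prompt"])  # hypothetical generation helper
        writer.write({"task_id": task_id, "completion": postprocess(completion)})

evaluate_functional_correctness(
    sample_file="answers.jsonl",
    k=[1],
    n_workers=16,
    timeout=15,
    problem_file=problem_dict,
)
```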