Update README.md
README.md
@@ -60,11 +60,11 @@ To simplify the comparison, we chose the Pass@1 metric for the Python language,
 | Model | HumanEval python pass@1 |
 | --- | --- |
 | CodeLlama-7b-hf | 30.5% |
-| opencsg-CodeLlama-7b-v0.1(4k) | **
+| opencsg-CodeLlama-7b-v0.1(4k) | **43.9%** |
 | CodeLlama-13b-hf | 36.0% |
-| opencsg-CodeLlama-13b-v0.1(4k) | **
+| opencsg-CodeLlama-13b-v0.1(4k) | **51.2%** |
 | CodeLlama-34b-hf | 48.2% |
-| opencsg-CodeLlama-34b-v0.1(4k) | **
+| opencsg-CodeLlama-34b-v0.1(4k) | **56.1%** |

 **TODO**

 - We will provide more benchmark scores on fine-tuned models in the future.

@@ -185,11 +185,11 @@ HumanEval is the most common benchmark for evaluating a model's performance in code generation, especially
 | Model | HumanEval python pass@1 |
 | --- | --- |
 | CodeLlama-7b-hf | 30.5% |
-| opencsg-CodeLlama-7b-v0.1(4k) | **
+| opencsg-CodeLlama-7b-v0.1(4k) | **43.9%** |
 | CodeLlama-13b-hf | 36.0% |
-| opencsg-CodeLlama-13b-v0.1(4k) | **
+| opencsg-CodeLlama-13b-v0.1(4k) | **51.2%** |
 | CodeLlama-34b-hf | 48.2% |
-| opencsg-CodeLlama-34b-v0.1(4k) | **
+| opencsg-CodeLlama-34b-v0.1(4k) | **56.1%** |

 **TODO**

 - We will provide more scores for fine-tuned models on various benchmarks in the future.
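The tables in this diff report HumanEval Pass@1. For context, pass@k is conventionally computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021) rather than by greedy sampling alone; a minimal sketch in Python — the per-problem sample counts below are hypothetical, not the numbers behind the scores above:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021).

    n: total completions sampled for a problem
    c: completions that pass the unit tests
    k: evaluation budget (k=1 for Pass@1)
    """
    if n - c < k:
        # Every size-k subset must contain at least one passing completion.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical per-problem (n, c) counts; the benchmark score is the mean
# of the per-problem estimates across the evaluation set.
results = [(10, 4), (10, 6), (10, 2)]
score = sum(pass_at_k(n, c, 1) for n, c in results) / len(results)
print(f"pass@1 = {score:.1%}")
```

For k=1 the estimator reduces to the fraction of passing completions per problem, which is why Pass@1 is often approximated with a single greedy sample per task.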