Update README.md
README.md CHANGED

@@ -29,7 +29,7 @@ This model is based on [deepseek-coder-1.3b-base](https://huggingface.co/deepsee
 
 ## Benchmark Scores
 
-The
+The OpenCodeInterpreter Models series exemplifies the evolution of coding model performance, particularly highlighting the significant enhancements brought about by the integration of execution feedback. In an effort to quantify these improvements, we present a detailed comparison across two critical benchmarks: HumanEval and MBPP. This comparison not only showcases the individual performance metrics on each benchmark but also provides an aggregated view of the overall performance enhancement. The subsequent table succinctly encapsulates the performance data, offering a clear perspective on how execution feedback contributes to elevating the models' capabilities in code interpretation and execution tasks.
 
 | **Benchmark** | **HumanEval (+)** | **MBPP (+)** | **Average (+)** |
 |---------------|-------------------|--------------|-----------------|
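
The table introduced by this commit ends with an **Average (+)** column. Assuming that column is simply the arithmetic mean of the HumanEval (+) and MBPP (+) scores (the diff does not state the formula, so this is a guess), it could be computed as:

```python
def average_score(humaneval: float, mbpp: float) -> float:
    """Arithmetic mean of the two benchmark scores.

    Assumption: the README's "Average (+)" column is a plain
    unweighted mean of HumanEval (+) and MBPP (+).
    """
    return (humaneval + mbpp) / 2

# Hypothetical scores, for illustration only:
print(average_score(80.0, 70.0))  # 75.0
```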