Update README.md
README.md CHANGED
@@ -24,7 +24,7 @@ We use [OpenChat](https://huggingface.co/openchat) packing, trained with [Axolot
This release is trained on a curated filtered subset of most of our GPT-4 augmented data.
It is the same subset of our data as was used in our [OpenOrcaxOpenChat-Preview2-13B model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).

-HF Leaderboard evals place this model as #2 for all models smaller than 30B at release time, outperforming all but one 13B model
+**HF Leaderboard evals place this model as #2 for all models smaller than 30B at release time, outperforming all but one 13B model.**

This release provides a first: a fully open model with class-breaking performance, capable of running fully accelerated on even moderate consumer GPUs.
Our thanks to the Mistral team for leading the way here.
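The closing paragraph of the hunk claims the release runs fully accelerated on even moderate consumer GPUs. As a rough illustration of that claim, here is a minimal Python sketch for loading the model with 🤗 Transformers in half precision; the repo ID `Open-Orca/Mistral-7B-OpenOrca`, the prompt, and the generation settings are assumptions for illustration, not something stated in the diff above.

```python
# Minimal sketch (not part of the diff): load the model in half precision for
# accelerated inference on a single consumer GPU. The repo ID and prompt below
# are assumptions for illustration; adjust dtype and devices for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/Mistral-7B-OpenOrca"  # assumed Hugging Face repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps the weights within typical consumer VRAM
    device_map="auto",          # requires `accelerate`; places weights on the available GPU(s)
)

prompt = "Summarize what makes Orca-style training data useful."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With `device_map="auto"` (which requires the `accelerate` package installed), Transformers places the weights on the available GPU automatically; swapping in 4-bit quantization via `bitsandbytes` is a common option for smaller cards.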