This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Notice:
- **The problem with running these evals is that the harness does not apply the model's chat template, so it is not a true evaluation; the results barely test the model.**
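To illustrate the point: a harness that feeds the raw question string skips the chat formatting the model was fine-tuned on. A minimal sketch, with the Llama 3 header/special-token layout reproduced by hand for illustration (in practice `tokenizer.apply_chat_template` from `transformers` produces this):

```python
# Sketch: the string a raw-prompt eval sends vs. the chat-formatted string
# the model was trained to expect. The Llama 3 special tokens below are
# written out by hand for illustration; with `transformers` you would call
# tokenizer.apply_chat_template(messages, add_generation_prompt=True).

def apply_llama3_chat_template(messages, add_generation_prompt=True):
    """Format a list of {role, content} dicts in the Llama 3 chat layout."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Cue the model to answer as the assistant.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

raw_prompt = "What is 2 + 2?"
chat_prompt = apply_llama3_chat_template(
    [{"role": "user", "content": raw_prompt}]
)

# The two strings the model actually sees are very different:
print(repr(raw_prompt))
print(repr(chat_prompt))
```

A chat-tuned model scored on `raw_prompt` is being tested out of distribution, which is why such numbers understate (or at least misrepresent) its ability.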
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Lyte__Llama-3.2-3B-Overthinker)