Update README.md
README.md CHANGED
@@ -37,15 +37,15 @@ Check the examples in the evaluation section to get an idea of its performance.
 
 ## ⚡ Quantized models
 
-Thanks to [
+Thanks to [Bartowski](https://huggingface.co/bartowski), [elinas](https://huggingface.co/elinas), the [mlx-community](https://huggingface.co/mlx-community) and others for providing these models.
 
-* **GGUF**: https://huggingface.co/
+* **GGUF**: https://huggingface.co/lmstudio-community/Meta-Llama-3-120B-Instruct-GGUF
 * **EXL2**: https://huggingface.co/elinas/Meta-Llama-3-120B-Instruct-4.0bpw-exl2
 * **mlx**: https://huggingface.co/mlx-community/Meta-Llama-3-120B-Instruct-4bit
 
 ## 🏆 Evaluation
 
-This model is great for creative writing but struggles in other tasks. I'd say use it with caution and don't expect it to outperform GPT-4 outside of some specific use cases.
+This model is great for creative writing but struggles in other tasks. I'd say use it with caution and don't expect it to outperform GPT-4 outside of some very specific use cases.
 
 * **X thread by Eric Hartford (creative writing)**: https://twitter.com/erhartford/status/1787050962114207886
 * **X thread by Daniel Kaiser (creative writing)**: https://twitter.com/spectate_or/status/1787257261309518101
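
For anyone wanting to try one of the quantized builds listed in the diff, here is a minimal sketch using the mlx-lm library with the 4-bit mlx repo named above. The repo id comes from the README; the prompt, token budget, and memory estimate are illustrative assumptions, and a ~120B model at 4-bit realistically needs an Apple-silicon machine with very large unified memory (roughly 70 GB or more).

```python
# Minimal sketch (assumes `pip install mlx-lm` on an Apple-silicon Mac with
# enough unified memory for a ~120B 4-bit model, roughly 70 GB or more).
from mlx_lm import load, generate

# Repo id taken from the mlx link in the README section above.
model, tokenizer = load("mlx-community/Meta-Llama-3-120B-Instruct-4bit")

# Llama 3 Instruct expects its chat template; the prompt here is illustrative.
messages = [{"role": "user", "content": "Write the opening scene of a noir short story."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate a short creative-writing sample and print it.
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```

The GGUF build should be usable in the same spirit with any llama.cpp-based runtime (llama.cpp CLI, LM Studio, and similar), and the EXL2 build with ExLlamaV2-based loaders.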