More fixes. #17
by Lewdiculous - opened

README.md CHANGED
```diff
@@ -15,9 +15,7 @@ tags:
 
 > [!WARNING]
 > **Warning:** <br>
->
->
-> Now testing.
+> For **Llama-3** models, at the moment, you have to use `gguf-imat-llama-3.py` and replace the config files with the ones in the [**llama-3-config-files**](https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script/tree/main/extra-files/llama-3-config-files) folder for properly quanting and generating the imatrix data.
 
 Pull Requests with your own features and improvements to this script are always welcome.
 
```
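The new warning tells Llama-3 users to pull the config files from the `llama-3-config-files` folder before quantizing. One possible way to fetch just that folder is sketched below; the use of `huggingface-cli` and the target directory `.` are assumptions, not part of this PR:

```shell
# Hedged sketch: download only the llama-3 config files from the script repo.
# Assumes huggingface_hub's CLI is installed; destination "." is an assumption.
huggingface-cli download FantasiaFoundry/GGUF-Quantization-Script \
  --include "extra-files/llama-3-config-files/*" \
  --local-dir .
# Then overwrite your local config files with the downloaded ones
# before running gguf-imat-llama-3.py.
```

This is a setup fragment only; adapt the paths to wherever your copy of the script keeps its config files.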