Update README.md
README.md CHANGED
@@ -23,7 +23,7 @@ For more details, please refer to [our blog post](https://note.com/elyza/n/n360b
 
 We have prepared two quantized model options, GGUF and AWQ. This is the GGUF (Q4_K_M) model, converted using [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
-
+The following table shows the performance degradation due to quantization:
 
 | Model | ELYZA-tasks-100 GPT4 score |
 | :-------------------------------- | ---: |
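The README text above notes that this is a GGUF (Q4_K_M) model converted with [llama.cpp](https://github.com/ggerganov/llama.cpp). As a minimal sketch, assuming the GGUF file has been downloaded locally and llama-cpp-python (the Python bindings for llama.cpp) is installed, it could be loaded roughly as follows; the file name and parameters below are placeholders, not values taken from this repository:

```python
# Minimal usage sketch (illustrative, not from the README): load a Q4_K_M GGUF
# file with llama-cpp-python and run a simple chat-style generation.
from llama_cpp import Llama

# Placeholder path: substitute the actual .gguf file downloaded from this repository.
llm = Llama(model_path="./model-q4_k_m.gguf", n_ctx=4096)

# Illustrative generation settings.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```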