brucethemoose committed
Commit • 63ca10e
Parent(s): 88bcc86
Update README.md
README.md
CHANGED
@@ -12,7 +12,7 @@ pipeline_tag: text-generation
 
 Quantized with exllamav2 on 200 rows (400K tokens) of a long Orca-Vicuna-format chat, a sci-fi story, and a fantasy story. This should hopefully yield better chat performance than the default wikitext quantization.
 
-4bpw is enough for **~47K context on a 24GB GPU**. I would highly recommend running it in exui for speed at long context.
+4bpw is enough for **~47K context on a 24GB GPU**. I would highly recommend running it in exui for speed at long context. I go into more detail in this [Reddit post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/).
 
 ***
 
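For reference, a calibration run like the one described in the README might look roughly like the sketch below, driving exllamav2's convert.py script. The model paths and the parquet dataset name are placeholders, not the actual files used; 200 rows of 2048 tokens matches the ~400K calibration tokens mentioned above.

```python
# Hedged sketch: quantizing with exllamav2's convert.py on a custom
# calibration set instead of the default wikitext data. All paths and the
# parquet filename are hypothetical placeholders.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "/models/base-fp16",         # source fp16 model (placeholder)
        "-o", "/tmp/exl2-work",            # working/scratch directory
        "-cf", "/models/quant-4bpw-exl2",  # output directory for the quant
        "-c", "chat_and_stories.parquet",  # custom chat + story calibration rows
        "-r", "200",                       # 200 calibration rows...
        "-l", "2048",                      # ...of 2048 tokens each (~400K total)
        "-b", "4.0",                       # 4 bits per weight
    ],
    check=True,
)
```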
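And while exui is the recommended front end, a minimal sketch of loading the 4bpw quant at long context with exllamav2's Python API looks roughly like this. The model path, sequence length, prompt, and sampler settings are assumptions; the 8-bit KV cache (the main trick covered in the linked Reddit post) is what lets ~47K tokens fit next to the 4bpw weights on 24GB.

```python
# Hedged sketch: loading a 4bpw exl2 quant at ~47K context with exllamav2.
from exllamav2 import (
    ExLlamaV2,
    ExLlamaV2Cache_8bit,
    ExLlamaV2Config,
    ExLlamaV2Tokenizer,
)
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/quant-4bpw-exl2"  # placeholder path
config.prepare()
config.max_seq_len = 47104  # ~47K tokens; reduce if you hit OOM

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_8bit(model, lazy=True)  # FP8 cache roughly halves KV memory
model.load_autosplit(cache)                    # fill available VRAM automatically

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

# Orca-Vicuna prompt format, matching the calibration data
prompt = "SYSTEM: You are a helpful assistant.\nUSER: Hello!\nASSISTANT:"
print(generator.generate_simple(prompt, settings, num_tokens=200))
```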