CyberTimon committed
Commit 921b8c2
1 Parent(s): 7c84d30
Update README.md
README.md CHANGED
@@ -8,4 +8,4 @@ Converted with `python llama.py ./chimera-7b c4 --wbits 4 --true-sequential --gr
 
 It uses groupsize 128. Doesn't use act-order and got quantized with the oobabooga gpt-q branch so it works there.
 
-Anyone need a 13b version?
+Anyone need a 13b version? (Edit: Can't do it right now as I only get out of memory errors while quantizing.)
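
As a hedged illustration of how a checkpoint like this is used (not part of the commit itself): a 4-bit, groupsize-128 GPTQ model quantized on the oobabooga GPTQ-for-LLaMa branch is typically launched in text-generation-webui roughly as below. The model folder name is hypothetical, and the flags assume the GPTQ-for-LLaMa loader that was current around this commit.

```bash
# Sketch only: model directory name is hypothetical; flags assume the
# GPTQ-for-LLaMa loader in text-generation-webui (4-bit, groupsize 128, no act-order).
python server.py --model chimera-7b-4bit-128g --wbits 4 --groupsize 128 --model_type llama
```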