TehVenom committed
Commit
ee028d3
1 Parent(s): 00d5197

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -26,7 +26,7 @@ The current Metharme-13b has been trained as a LoRA, then merged down to the bas
 
 It has also been quantized down to 4Bit using the GPTQ library available here: https://github.com/0cc4m/GPTQ-for-LLaMa
 ```
-python llama.py .\TehVenom_Metharme-13b-Merged c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors Metharme-13b-GPTQ-4bit-128g.no-act-order.safetensors
+python llama.py .\TehVenom_Metharme-13b-Merged c4 --wbits 8 --act-order --save_safetensors Metharme-13b-GPTQ-8bit.act-order.safetensors
 ```
 
 ## Prompting
@@ -124,7 +124,7 @@ Current evals out of the Metharme-13b model: <br>
 <td>7.038073539733887</td>
 </tr>
 <tr>
-<td>Metharme 13b - 4bit - [true-sequential & 128g]</td>
+<td>Metharme 13b - 8bit - [true-sequential & 128g]</td>
 <td>TBD</td>
 <td>TBD</td>
 <td>TBD</td>
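
The command change in the first hunk swaps the 4-bit grouped quantization call (`--wbits 4 --true-sequential --groupsize 128`) for an 8-bit act-order call (`--wbits 8 --act-order`, no groupsize), saving to `Metharme-13b-GPTQ-8bit.act-order.safetensors`. Below is a minimal sketch of how such a GPTQ safetensors checkpoint might be loaded with the AutoGPTQ library rather than the GPTQ-for-LLaMa scripts referenced in the README; the model directory, basename, and prompt are illustrative assumptions and are not part of this commit.

```python
# Sketch (assumption, not from this commit): loading an 8-bit act-order GPTQ
# checkpoint with AutoGPTQ. GPTQ-for-LLaMa's llama.py does not write a
# quantize_config.json, so the quantization settings are supplied explicitly.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# Hypothetical local directory holding the merged base model plus the
# safetensors file produced by the llama.py command above.
model_dir = "TehVenom_Metharme-13b-Merged"

# bits=8, group_size=-1, desc_act=True mirrors --wbits 8 --act-order with no --groupsize.
quantize_config = BaseQuantizeConfig(bits=8, group_size=-1, desc_act=True)

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename="Metharme-13b-GPTQ-8bit.act-order",  # file name minus .safetensors
    use_safetensors=True,
    quantize_config=quantize_config,
    device="cuda:0",
)

# Metharme prompt format from the README's Prompting section.
prompt = "<|system|>Enter RP mode.<|user|>Hello!<|model|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

By the same mapping, the previous 4-bit command would correspond to `BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)`.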