Ex_y committed
Commit 3fe73e3
Parent(s): a6b0a0e

Update README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -7,6 +7,10 @@ quantized_by: Ex_y
  base_model_relation: quantized
  ---
 
+ EXL2 quants of [TheDrummer/Hubble-4B-v1](https://huggingface.co/TheDrummer/Hubble-4B-v1)
+
+ Default parameters. The 6.5bpw and 8.0bpw quants use an 8-bit lm_head layer, while the 4.25bpw and 5.0bpw quants use a 6-bit lm_head layer.
+
  # Join our Discord! https://discord.gg/Nbv9pQ88Xb
 
  ### Works on [Kobold 1.74](https://github.com/LostRuins/koboldcpp/releases/tag/v1.74)!
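
For reference, a minimal sketch of loading one of these EXL2 quants with the [exllamav2](https://github.com/turboderp/exllamav2) Python API; the local folder name and generation settings below are placeholders for illustration, not part of this repo:

```python
# Minimal sketch: load an EXL2 quant and generate a short completion.
# Assumes the quant has been downloaded to a local folder (path is hypothetical).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./Hubble-4B-v1-exl2-6.5bpw"   # hypothetical local download path

config = ExLlamaV2Config(model_dir)          # reads config.json from the quant folder
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)     # allocate KV cache as layers are loaded
model.load_autosplit(cache)                  # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, Hubble!", max_new_tokens=64))
```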