---
license: mit
language:
- en
pipeline_tag: text-generation
---
My own (ZeroWw) quantizations.
The output and embedding tensors are kept at f16, while all other tensors are quantized to q5_k or q6_k. A sketch of how this can be done is shown below.
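
As a rough illustration, the following is a minimal sketch of how such a mixed quantization can be produced with llama.cpp's `llama-quantize` tool. The binary path, file names, and helper function are placeholders for illustration, not the exact commands used for these files.

```python
# Minimal sketch (assumption: the models were quantized with llama.cpp's llama-quantize).
# Paths and file names below are placeholders.
import subprocess

def quantize(src_f16: str, dst: str, quant_type: str) -> None:
    """Quantize a GGUF model while keeping output and token-embedding tensors at f16."""
    subprocess.run(
        [
            "./llama-quantize",
            "--output-tensor-type", "f16",    # keep the output tensor at f16
            "--token-embedding-type", "f16",  # keep the embedding tensor at f16
            src_f16,                          # full-precision f16 input GGUF
            dst,                              # quantized output GGUF
            quant_type,                       # e.g. "Q5_K" or "Q6_K" for all remaining tensors
        ],
        check=True,
    )

# Produce the two variants described above.
quantize("model.f16.gguf", "model.f16.q6.gguf", "Q6_K")
quantize("model.f16.gguf", "model.f16.q5.gguf", "Q5_K")
```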
Result:
both the f16.q6 and f16.q5 variants are smaller than the standard q8_0 quantization,
and they perform as well as the pure f16 model.
Updated on: Tue Aug 20, 08:52:15