---
license: apache-2.0
---
# Poro-34B-gguf
This is a GGUF quantization of the [Poro-34B](https://huggingface.co/LumiOpen/Poro-34B) model.
Please refer to that repository's model card for details.
The current revision is a quantization of the 1000B token checkpoint.
The conversion was done with [llama.cpp](https://github.com/ggerganov/llama.cpp) version b2354 (commit e25fb4b18fcedb9bed6be4585cf842e9a669b28b)
on a Google Compute Engine machine generously sponsored by [Valohai](https://valohai.com/).
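
A minimal usage sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is shown below. The GGUF filename is a placeholder; substitute the actual quantization file you download from this repository.

```python
# Minimal sketch: loading a GGUF quantization of Poro-34B with llama-cpp-python.
# The model_path below is a placeholder -- point it at the .gguf file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="poro-34b.gguf",  # placeholder filename
    n_ctx=2048,                  # context window size
)

output = llm("Suomi on", max_tokens=64)
print(output["choices"][0]["text"])
```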