---
license: apache-2.0
---
# Poro-34B-gguf
This is a GGUF quantization of the [Poro-34B](https://huggingface.co/LumiOpen/Poro-34B) model.
Please refer to that repository's model card for details.
The conversion was done with [llama.cpp](https://github.com/ggerganov/llama.cpp) version `bb50a792ec2a49944470c82694fa364345e95170`.
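
As a rough illustration of how a GGUF file from this repository might be used, here is a minimal sketch with the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings. The file name `poro-34b.Q4_K_M.gguf` is a placeholder assumption, not a file guaranteed to exist here; point `model_path` at whichever quantized file you download.

```python
# Minimal sketch: loading a GGUF quantization with llama-cpp-python
# (install with: pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="poro-34b.Q4_K_M.gguf",  # placeholder; use your downloaded GGUF file
    n_ctx=2048,                          # context window size
)

# Run a short completion and print the generated text.
output = llm("The capital of Finland is", max_tokens=32)
print(output["choices"][0]["text"])
```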