---
license: apache-2.0
---

# Poro-34B-gguf

This is a GGUF quantization of the Poro-34B model.

Please refer to the original Poro-34B model card for details.

The conversion was done with llama.cpp at commit `bb50a792ec2a49944470c82694fa364345e95170`.
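
As a usage illustration, a GGUF file like this one can be loaded with the llama-cpp-python bindings. The sketch below is a minimal example, not part of this repository: the filename and generation parameters are placeholders you would substitute with the quantization you actually downloaded.

```python
# Minimal sketch: load a local GGUF file with llama-cpp-python and run one prompt.
# "poro-34b-q4_k_m.gguf" is a hypothetical filename; replace it with your download.
from llama_cpp import Llama

llm = Llama(
    model_path="poro-34b-q4_k_m.gguf",  # path to the quantized GGUF file
    n_ctx=2048,                         # context window size
)

# Poro-34B is a Finnish-English model, so either language works as a prompt.
output = llm("Suomen pääkaupunki on", max_tokens=32)
print(output["choices"][0]["text"])
```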