---
license: apache-2.0
---
This is a GGUF quantization of the Poro-34B model.
Please refer to the original Poro-34B repository's model card for details about the model itself.
The conversion was done with llama.cpp at commit bb50a792ec2a49944470c82694fa364345e95170.
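
Below is a minimal sketch of loading one of the quantized files with the llama-cpp-python bindings (not part of the original card; any llama.cpp-based frontend built at or after the commit above should work). The filename and parameters are placeholders, so substitute the actual file you download from this repository.

```python
# Minimal sketch: run a GGUF quantization of Poro-34B with llama-cpp-python.
# The model filename below is hypothetical; use the file present in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="poro-34b.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,       # context window size
    n_gpu_layers=0,   # set > 0 to offload layers if built with GPU support
)

# Generate a short completion from a prompt.
output = llm("Suomi on", max_tokens=64)
print(output["choices"][0]["text"])
```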