luca10g/Suzume-llama-3-8B-multilingual-Q6K-GGUF
At the time of conversion, no Q6_K quantization of this model was available.
This model was converted to GGUF format from lightblue/suzume-llama-3-8B-multilingual
using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
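Like other GGUF conversions, this file can be run locally with llama.cpp. A minimal sketch; note the `--hf-file` name below is an assumption, so check the repo's file listing for the actual quantized filename:

```shell
# Install llama.cpp (Homebrew shown; building from source also works)
brew install llama.cpp

# Run the quantized model straight from the Hugging Face repo.
# The --hf-file value is an assumed filename, not confirmed by this card.
llama-cli --hf-repo luca10g/Suzume-llama-3-8B-multilingual-Q6K-GGUF \
  --hf-file suzume-llama-3-8b-multilingual-q6_k.gguf \
  -p "Hello, how are you?"
```

The same repo/file pair also works with `llama-server` for an OpenAI-compatible local endpoint.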
Base model: meta-llama/Meta-Llama-3-8B-Instruct