---
base_model: google/madlad400-10b-mt
inference: false
license: apache-2.0
model_name: madlad400-10b-mt-gguf
pipeline_tag: translation
---

# MADLAD-400-10B-MT - GGUF

- Original model: [MADLAD-400-10B-MT](https://huggingface.co/google/madlad400-10b-mt)

## Description

This repo contains GGUF format model files for [MADLAD-400-10B-MT](https://huggingface.co/google/madlad400-10b-mt), for use with [llama.cpp](https://github.com/ggerganov/llama.cpp) and compatible software.

Converted to GGUF with the llama.cpp script [convert_hf_to_gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py) and quantized with llama.cpp `llama-quantize`, both at llama.cpp release [b3325](https://github.com/ggerganov/llama.cpp/commits/b3325).
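A minimal usage sketch, assuming a local llama.cpp build and one of the quantized files from this repo already downloaded (the filename below is hypothetical; substitute the quantization you chose). MADLAD-400 MT models expect the target language as a `<2xx>` token prefixed to the source text:

```shell
# Translate English to German with llama.cpp.
# "<2de>" selects German as the target language; any supported
# language code can be used in its place (e.g. <2fr>, <2ja>).
./llama-cli \
  -m madlad400-10b-mt-q4_k_m.gguf \
  -p "<2de> How are you today?"
```

The `<2xx>` prefix is part of the model's training format, not a llama.cpp feature; omitting it will produce unpredictable output.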