# TheBloke/CodeLlama-13B-Python-fp16 GGUF Quantizations
Optimized GGUF quantization files for enhanced model performance
Powered by Featherless AI - run any model you'd like for a simple, small fee.
## Available Quantizations
| Quantization Type | File | Size |
|---|---|---|
| IQ4_XS | TheBloke-CodeLlama-13B-Python-fp16-IQ4_XS.gguf | 6694.33 MB |
| Q2_K | TheBloke-CodeLlama-13B-Python-fp16-Q2_K.gguf | 4629.39 MB |
| Q3_K_L | TheBloke-CodeLlama-13B-Python-fp16-Q3_K_L.gguf | 6608.54 MB |
| Q3_K_M | TheBloke-CodeLlama-13B-Python-fp16-Q3_K_M.gguf | 6044.17 MB |
| Q3_K_S | TheBloke-CodeLlama-13B-Python-fp16-Q3_K_S.gguf | 5396.82 MB |
| Q4_K_M | TheBloke-CodeLlama-13B-Python-fp16-Q4_K_M.gguf | 7501.56 MB |
| Q4_K_S | TheBloke-CodeLlama-13B-Python-fp16-Q4_K_S.gguf | 7079.30 MB |
| Q5_K_M | TheBloke-CodeLlama-13B-Python-fp16-Q5_K_M.gguf | 8802.34 MB |
| Q5_K_S | TheBloke-CodeLlama-13B-Python-fp16-Q5_K_S.gguf | 8556.64 MB |
| Q6_K | TheBloke-CodeLlama-13B-Python-fp16-Q6_K.gguf | 10184.42 MB |
| Q8_0 | TheBloke-CodeLlama-13B-Python-fp16-Q8_0.gguf | 13190.57 MB |
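As a rough local-usage sketch (not part of the original card), any file listed above can be pulled from this repo with `huggingface_hub` and loaded with `llama-cpp-python`. The repo id and the `Q4_K_M` filename come from this card; the context size and GPU-offload settings below are assumptions to tune for your hardware.

```python
# Minimal sketch: download one quantization from this repo and run it locally
# with llama-cpp-python. Repo id and filename are taken from the table above;
# n_ctx and n_gpu_layers are assumed values - adjust them for your machine.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="featherless-ai-quants/TheBloke-CodeLlama-13B-Python-fp16-GGUF",
    filename="TheBloke-CodeLlama-13B-Python-fp16-Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # assumed context window
    n_gpu_layers=-1,   # offload all layers if a GPU build of llama.cpp is installed
)

# CodeLlama-Python is a code-completion model, so prompt it with code to continue.
output = llm("def fibonacci(n):\n", max_tokens=128, temperature=0.2)
print(output["choices"][0]["text"])
```

Lower-bit files (e.g. Q2_K, Q3_K_S) trade output quality for a smaller memory footprint, while Q6_K and Q8_0 stay closest to the fp16 base model at roughly 10-13 GB.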
## Powered by Featherless AI
### Key Features
- Instant Hosting - Deploy any Llama model on HuggingFace instantly
- Zero Infrastructure - No server setup or maintenance required
- Vast Compatibility - Support for 2400+ models and counting
- Affordable Pricing - Starting at just $10/month
Links:
Get Started | Documentation | Models
Model tree for featherless-ai-quants/TheBloke-CodeLlama-13B-Python-fp16-GGUF:
- Base model: TheBloke/CodeLlama-13B-Python-fp16