---
license: apache-2.0
tags:
- fire
- function
- firefunction
- firefunction-v1
- gguf
- GGUF
- firefunction-v1-GGUF
- firefunction-v1-gguf
- 4-bit precision
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/653760343af2f64a0d4b60c7/k72LAqG6svkOCOYm_eDsh.png)
This repo hosts quantized GGUF versions of the following model: https://huggingface.co/fireworks-ai/firefunction-v1

Quantization was done with this script: https://github.com/CharlesMod/quantizeHFmodel
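
Below is a minimal sketch of how one of the quantized `.gguf` files could be loaded with `llama-cpp-python` after downloading it via `huggingface_hub`. The repo id and the `.gguf` filename are placeholders, not the actual file names in this repo; substitute the values shown in this repo's file listing.

```python
# Minimal sketch: download a quantized GGUF file and run a short completion.
# The repo id and filename below are placeholders -- check this repo's
# "Files and versions" tab for the real .gguf file name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="<this-repo-id>",                # user/repo name of this model card
    filename="firefunction-v1.Q4_K_M.gguf",  # hypothetical quantized file name
)

# Load the quantized model and generate a few tokens.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("What does the FireFunction model do?", max_tokens=64)
print(out["choices"][0]["text"])
```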