bartowski/Ministral-8B-Instruct-2410-GGUF
Could you check this one out? Found in the wild with an interesting claim.
https://huggingface.co/noneUsername/TouchNight-Ministral-8B-Instruct-2410-HF-W8A8-Dynamic-Per-Token
Worth noting: compared with the prince-canuma version, this one is smaller after quantization, and its accuracy is reportedly about one percentage point higher.