
Hello???

#1
by MarinaraSpaghetti - opened

I feel called out.

pepo21.gif

Owner

Haha, don't get me wrong. I do appreciate the effort you put into your models, and I wouldn't have quanted them in EXL2 if I didn't feel it was worthwhile. It's more that quanting into EXL2 can take a few hours, during which I can't do anything else on my comp. Have to pick and choose my battles, so to speak.

Haha, don’t worry, I’m just messing with ya. I’m really appreciative of the time and effort you put into the quants. 🙏 The wait time is why I ultimately ditched doing exl2 quants and switched to GGUFs since I can make them in 5 minutes, haha. Keep up the amazing work!
