Bielik-11B-v2.2 Collection: models based on Bielik-11B-v2.2, including instruct and quantized versions (17 items).
This model was converted to the Quanto format from SpeakLeash's Bielik-11B-v2.2-Instruct.
DISCLAIMER: Be aware that quantized models may show reduced response quality and possible hallucinations!
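For reference, a conversion along these lines can be done with optimum-quanto. This is only a minimal sketch assuming 4-bit weight-only quantization (qint4) with no activation quantization; the exact settings used for this checkpoint are not documented here.

```python
# Sketch of a Quanto 4-bit weight-only conversion (assumed settings, not the
# exact recipe used for this checkpoint).
from transformers import AutoModelForCausalLM
from optimum.quanto import QuantizedModelForCausalLM, qint4

# Load the original instruct model in full precision.
model = AutoModelForCausalLM.from_pretrained("speakleash/Bielik-11B-v2.2-Instruct")

# Quantize the weights to 4-bit integers and save the frozen result.
qmodel = QuantizedModelForCausalLM.quantize(model, weights=qint4)
qmodel.save_pretrained("Bielik-11B-v2.2-Instruct-Quanto-4bit")
```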
Optimum Quanto is a PyTorch quantization backend for Optimum. The model can be loaded with:
from optimum.quanto import QuantizedModelForCausalLM
qmodel = QuantizedModelForCausalLM.from_pretrained('speakleash/Bielik-11B-v2.2-Instruct-Quanto-4bit')
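A minimal end-to-end usage sketch follows. It assumes the tokenizer is available in this repository (otherwise load it from the original instruct model) and that the quantized wrapper forwards generate() to the underlying transformers model.

```python
# Minimal usage sketch; assumes optimum-quanto and transformers are installed.
import torch
from transformers import AutoTokenizer
from optimum.quanto import QuantizedModelForCausalLM

model_id = "speakleash/Bielik-11B-v2.2-Instruct-Quanto-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumed to ship with this repo
qmodel = QuantizedModelForCausalLM.from_pretrained(model_id)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Kim był Mikołaj Kopernik?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
with torch.no_grad():
    output_ids = qmodel.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```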
If you have any questions or suggestions, please use the Discussions tab. If you want to contact us directly, join our SpeakLeash Discord.
Base model: speakleash/Bielik-11B-v2