Using Prompt Template
#13 opened 4 months ago by fredrohn

No tokenizer available? (1 reply)
#10 opened 10 months ago by dspyrhsu

How is it possible that Q4_K_M performs better than any Q5, Q6 and even Q8? (1 reply)
#8 opened 12 months ago by alexcardo

[AUTOMATED] Model Memory Requirements
#7 opened 12 months ago by model-sizer-bot

Failed to create LLM 'zephyr' from '/models/zephyr-7b-alpha.Q5_K_M.gguf'.
#6 opened about 1 year ago by whoknowsmeinhf

Addressing Inconsistencies in Model Outputs: Understanding and Solutions
#5 opened about 1 year ago by shivammehta

zhapyer
#4 opened about 1 year ago by bharathi1604

Will there be a re-upload of this model? (2 replies)
#3 opened about 1 year ago by SolidSnacke

Free and ready to use zephyr-7B-beta-GGUF model as OpenAI API compatible endpoint (12 replies)
#2 opened about 1 year ago by limcheekin

Possible Loading Error with GPT4All (11 replies)
#1 opened about 1 year ago by deleted