#21 · Update README.md · opened 5 months ago by SalmanFaroz
#20 · Update README to fix the install of the huggingface-cli command · opened 10 months ago by JohanDL (a download sketch follows this list)
#19 · Hardware Requirements for Q4_K_M · 4 replies · opened 10 months ago by ShivanshMathur007
#18 · Function calling · 4 replies · opened 10 months ago by ybsid
#17 · Cannot load with ctransformers · 2 replies · opened 10 months ago by hattran
#16 · I love the mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf · 4 replies · opened 11 months ago by johanteekens
#15 · My CPU is only using 50% of its cores. · 2 replies · opened 11 months ago by jeffwadsworth
#14 · Would this run on 32GB RAM & 8GB VRAM? · 1 reply · opened 11 months ago by Troyanovsky
#13 · Weird. Ooga is still not loading after a fresh pull from the release. · 3 replies · opened 11 months ago by moona99
#12 · Anyone else seeing similar behavior? I especially like the start "Death, ..." plus some gobbledygook. · 1 reply · opened 11 months ago by BigDeeper
#10 · Could this model be deployed with FastChat? :) · opened 11 months ago by FlameHunter
#9 · How many tokens per second? · 12 replies · opened 11 months ago by Hoioi
#8 · KCPP Frankenstein experimental release for Mixtral · 1 reply · opened 11 months ago by Nexesenex
#6 · Not finding blk.0.ffn_gate.weight. I checked the sha256sum; it matches the Q6_K version. Any thoughts on how to fix this? · 2 replies · opened 11 months ago by BigDeeper
#5 · For the time being, this model works terribly with the unofficial llama.cpp support; it gives bad answers. The Instruct version is the best LLM so far. · 3 replies · opened 11 months ago by mirek190
#4 · create_tensor: tensor 'blk.0.ffn_gate.weight' not found · 9 replies · opened 11 months ago by Althenwolf
#3 · It works. · 6 replies · opened 11 months ago by Yuuru
#2 · Mixtral Instruct too? · 3 replies · opened 11 months ago by nbilla
#1 · Other quant types. · 2 replies · opened 11 months ago by dog3-l0ver