ikawrakow/mixtral-instruct-8x7b-quantized-gguf
GGUF · Inference Endpoints · conversational · 22 likes · License: apache-2.0
Branch: main · 1 contributor · History: 7 commits
Latest commit: ikawrakow, "Adding IQ3_XSS and fixed medium models" (0c06c11, 9 months ago)
| File | Size | Last commit | Age |
|---|---|---|---|
| .gitattributes | 1.56 kB | Adding k-quants | 10 months ago |
| README.md | 1.82 kB | Update README.md | 10 months ago |
| mixtral-instruct-8x7b-iq3-xxs.gguf | 18.3 GB (LFS) | Adding IQ3_XSS and fixed medium models | 9 months ago |
| mixtral-instruct-8x7b-q2k.gguf | 15.4 GB (LFS) | Adding k-quants | 10 months ago |
| mixtral-instruct-8x7b-q3k-medium.gguf | 22.5 GB (LFS) | Adding IQ3_XSS and fixed medium models | 9 months ago |
| mixtral-instruct-8x7b-q3k-small.gguf | 20.3 GB (LFS) | Adding k-quants | 10 months ago |
| mixtral-instruct-8x7b-q40.gguf | 26.4 GB (LFS) | Adding legacy ggml quants | 10 months ago |
| mixtral-instruct-8x7b-q41.gguf | 29.3 GB (LFS) | Adding legacy ggml quants | 10 months ago |
| mixtral-instruct-8x7b-q4k-medium.gguf | 28.4 GB (LFS) | Adding IQ3_XSS and fixed medium models | 9 months ago |
| mixtral-instruct-8x7b-q4k-small.gguf | 26.7 GB (LFS) | Had somehow missed to add Q4_K_S | 10 months ago |
| mixtral-instruct-8x7b-q50.gguf | 32.2 GB (LFS) | Adding legacy ggml quants | 10 months ago |
| mixtral-instruct-8x7b-q5k-small.gguf | 32.2 GB (LFS) | Fixed wrong file name | 10 months ago |
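A minimal sketch of fetching one of the quantized files above, assuming the standard Hugging Face "resolve" URL layout (the repo and file names come from the listing; the `curl` and `llama-cli` invocations in the comments are illustrative, not from this page):

```shell
# Build the direct-download URL for one file from this repository.
REPO="ikawrakow/mixtral-instruct-8x7b-quantized-gguf"
FILE="mixtral-instruct-8x7b-q4k-medium.gguf"   # any file name from the table above
URL="https://huggingface.co/${REPO}/resolve/main/${FILE}"
echo "$URL"

# Then download (large file, 28.4 GB) with e.g.:
#   curl -L -O "$URL"
# and run it with a GGUF-capable runtime such as llama.cpp:
#   ./llama-cli -m "$FILE" -p "[INST] Hello [/INST]"
```

The `huggingface-cli download` command from the `huggingface_hub` package is an alternative that handles resuming and caching.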