mav23/WizardLM-70B-V1.0-GGUF
Tags: GGUF, Inference Endpoints
arXiv: 2304.12244, 2306.08568, 2308.09583
License: llama2
Branch: main
Contributors: 1 (mav23)
History: 15 commits
Latest commit: 295d70c (verified), "Upload folder using huggingface_hub", 15 days ago
File                            Size      LFS   Last commit message                   Updated
.gitattributes                  2.43 kB         Upload folder using huggingface_hub   15 days ago
README.md                       10 kB           Upload folder using huggingface_hub   16 days ago
wizardlm-70b-v1.0.Q2_K.gguf     25.5 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q3_K.gguf     33.3 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q3_K_L.gguf   36.1 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q3_K_M.gguf   33.3 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q3_K_S.gguf   29.9 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q4_0.gguf     38.9 GB   LFS   Upload folder using huggingface_hub   16 days ago
wizardlm-70b-v1.0.Q4_1.gguf     43.2 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q4_K.gguf     41.4 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q4_K_M.gguf   41.4 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q4_K_S.gguf   39.2 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q5_0.gguf     47.5 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q5_K.gguf     48.8 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q5_K_M.gguf   48.8 GB   LFS   Upload folder using huggingface_hub   15 days ago
wizardlm-70b-v1.0.Q5_K_S.gguf   47.5 GB   LFS   Upload folder using huggingface_hub   15 days ago
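
The files above are GGUF quantizations of WizardLM-70B-V1.0 at different precision/size trade-offs. A minimal sketch of fetching one of them and running it locally is shown below, assuming the huggingface_hub and llama-cpp-python packages are installed and that the machine has enough disk space and memory for the chosen file; the Q4_K_M quant is picked here only as an example, and the Vicuna-style prompt template is an assumption (check the README for the exact format).

```python
# Sketch: download one quantized GGUF file from this repo and load it with
# llama-cpp-python. Requires `pip install huggingface_hub llama-cpp-python`
# and roughly 41 GB of disk space for the example file.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download (or reuse from the local cache) a single file from the repo.
model_path = hf_hub_download(
    repo_id="mav23/WizardLM-70B-V1.0-GGUF",
    filename="wizardlm-70b-v1.0.Q4_K_M.gguf",
)

# n_ctx and n_gpu_layers are illustrative values; tune them for your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=0)

# Vicuna-style prompt assumed here; consult the model card for the exact template.
output = llm("USER: What is the GGUF format? ASSISTANT:", max_tokens=128)
print(output["choices"][0]["text"])
```

Smaller quants (Q2_K, Q3_K_S) fit in less memory at some quality cost, while the Q5 variants are closer to the original weights but need correspondingly more RAM or VRAM.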