dranger003/bagel-dpo-34b-v0.5-iMat.GGUF
Text Generation · GGUF · Inference Endpoints · License: apache-2.0
Branch: main · 2 contributors · History: 13 commits
Latest commit: Update README.md with license information (#1) · dranger003, Chen-01AI · 22473a4 (verified) · 5 months ago
| File | Size | Last commit | Updated |
|------|------|-------------|---------|
| .gitattributes | 2.02 kB | Upload ggml-bagel-dpo-34b-v0.5-iq3_xs.gguf with huggingface_hub | 7 months ago |
| README.md | 750 Bytes | Update README.md with license information (#1) | 5 months ago |
| ggml-bagel-dpo-34b-v0.5-f16-imatrix.dat | 15.3 MB (LFS) | Upload ggml-bagel-dpo-34b-v0.5-f16-imatrix.dat with huggingface_hub | 7 months ago |
| ggml-bagel-dpo-34b-v0.5-iq2_xs.gguf | 10.3 GB (LFS) | Upload ggml-bagel-dpo-34b-v0.5-iq2_xs.gguf with huggingface_hub | 7 months ago |
| ggml-bagel-dpo-34b-v0.5-iq3_xs.gguf | 14.2 GB (LFS) | Upload ggml-bagel-dpo-34b-v0.5-iq3_xs.gguf with huggingface_hub | 7 months ago |
| ggml-bagel-dpo-34b-v0.5-iq4_xs.gguf | 18.5 GB (LFS) | Upload ggml-bagel-dpo-34b-v0.5-iq4_xs.gguf with huggingface_hub | 7 months ago |
| ggml-bagel-dpo-34b-v0.5-q5_k.gguf | 24.3 GB (LFS) | Upload ggml-bagel-dpo-34b-v0.5-q5_k.gguf with huggingface_hub | 7 months ago |
| ggml-bagel-dpo-34b-v0.5-q6_k.gguf | 28.2 GB (LFS) | Upload ggml-bagel-dpo-34b-v0.5-q6_k.gguf with huggingface_hub | 7 months ago |
| ggml-bagel-dpo-34b-v0.5-q8_0.gguf | 36.5 GB (LFS) | Upload ggml-bagel-dpo-34b-v0.5-q8_0.gguf with huggingface_hub | 7 months ago |
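
The repository holds GGUF quantizations of bagel-dpo-34b-v0.5 (IQ2_XS through Q8_0) plus an f16 importance-matrix (imatrix) file. As a minimal sketch of how one of these files could be fetched with huggingface_hub (the library named in the commit messages), assuming the repo id dranger003/bagel-dpo-34b-v0.5-iMat.GGUF and the filenames from the table above:

```python
# Minimal sketch: download one quantization and the imatrix file from this repo
# using huggingface_hub. Filenames come from the listing above; pick whichever
# quantization fits your hardware.
from huggingface_hub import hf_hub_download

repo_id = "dranger003/bagel-dpo-34b-v0.5-iMat.GGUF"

# ~18.5 GB IQ4_XS quantization (loadable by a GGUF runtime such as llama.cpp)
model_path = hf_hub_download(
    repo_id=repo_id,
    filename="ggml-bagel-dpo-34b-v0.5-iq4_xs.gguf",
)

# f16 importance-matrix data; only needed if you re-quantize the model yourself
imatrix_path = hf_hub_download(
    repo_id=repo_id,
    filename="ggml-bagel-dpo-34b-v0.5-f16-imatrix.dat",
)

print(model_path, imatrix_path)
```

The downloaded .gguf file can then be loaded directly by a GGUF-compatible runtime, while the imatrix .dat file is typically passed to llama.cpp's quantization tooling when producing new importance-matrix-aware quants.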