Joseph717171 / Models

GGUF · Inference Endpoints · conversational · 3 likes
1 contributor · History: 50 commits

Latest commit: 1f86e69 (verified, 4 months ago) · Joseph717171 · Upload Mistral-Nemo-12B-Instruct-2407-Q8_0.IQ6_K.gguf with huggingface_hub
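The commit messages indicate the files were pushed with the huggingface_hub client. The exact upload script is not part of this repo; the snippet below is a minimal sketch of such an upload using `HfApi.upload_file`, assuming a local copy of the file and a write token already configured (for example via `huggingface-cli login`).

```python
# Minimal upload sketch (assumptions: the .gguf file exists locally and an
# authenticated Hugging Face token is available in the environment).
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="Mistral-Nemo-12B-Instruct-2407-Q8_0.IQ6_K.gguf",  # local file path
    path_in_repo="Mistral-Nemo-12B-Instruct-2407-Q8_0.IQ6_K.gguf",     # destination name in the repo
    repo_id="Joseph717171/Models",
    repo_type="model",
    commit_message="Upload Mistral-Nemo-12B-Instruct-2407-Q8_0.IQ6_K.gguf with huggingface_hub",
)
```

Multi-gigabyte files pushed this way are stored on the Hub through Git LFS, which matches the LFS flags shown in the file listing below.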
All files carry the Hub's "Safe" scan badge; every .gguf file is stored via Git LFS.

| File | Size | Last commit message | Updated |
|---|---|---|---|
| .gitattributes | 3.99 kB | Upload Mistral-Nemo-12B-Instruct-2407-Q8_0.IQ6_K.gguf with huggingface_hub | 4 months ago |
| Hathor_Fractionate-L3-8B-v.05-F32.IQ4_K_M.gguf | 8.4 GB | Upload Hathor_Fractionate-L3-8B-v.05-F32.IQ4_K_M.gguf with huggingface_hub | 4 months ago |
| Llama-3SOME-8B-v2-F32.IQ4_K_M.gguf | 8.4 GB | Upload Llama-3SOME-8B-v2-F32.IQ4_K_M.gguf with huggingface_hub | 4 months ago |
| Meta-Llama-3-8B-Instruct-BF16.IQ5_K_M.gguf | 7.04 GB | Upload Meta-Llama-3-8B-Instruct-BF16.IQ5_K_M.gguf with huggingface_hub | 4 months ago |
| Meta-Llama-3-8B-Instruct-BF16.IQ6_K.gguf | 7.84 GB | Upload Meta-Llama-3-8B-Instruct-BF16.IQ6_K.gguf with huggingface_hub | 4 months ago |
| Meta-Llama-3-8B-Instruct-F32.IQ4_K_M.gguf | 8.4 GB | Upload Meta-Llama-3-8B-Instruct-F32.IQ4_K_M.gguf with huggingface_hub | 4 months ago |
| Meta-Llama-3.1-8B-Instruct-F16.Q8_0.gguf | 9.53 GB | Upload Meta-Llama-3.1-8B-Instruct-F16.Q8_0.gguf with huggingface_hub | 4 months ago |
| Meta-Llama-3.1-8B-Instruct-F32.IQ6_K.gguf | 9.94 GB | Upload Meta-Llama-3.1-8B-Instruct-F32.IQ6_K.gguf with huggingface_hub | 4 months ago |
| Meta-Llama-3.1-8B-Instruct-F32.Q6_K.gguf | 9.94 GB | Upload Meta-Llama-3.1-8B-Instruct-F32.Q6_K.gguf with huggingface_hub | 4 months ago |
| Mistral-Nemo-12B-Instruct-2407-Q8_0.IQ6_K.gguf | 10.4 GB | Upload Mistral-Nemo-12B-Instruct-2407-Q8_0.IQ6_K.gguf with huggingface_hub | 4 months ago |
| Phi-3-mini-4k-instruct-F16.IQ3_K_M.gguf | 2.23 GB | Upload Phi-3-mini-4k-instruct-F16.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| Phi-3-mini-4k-instruct-F32.IQ3_K_M.gguf | 2.62 GB | Upload Phi-3-mini-4k-instruct-F32.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| Phi-3-mini-4k-instruct-F32.IQ8_0.gguf | 4.64 GB | Upload Phi-3-mini-4k-instruct-F32.IQ8_0.gguf with huggingface_hub | 4 months ago |
| Phi-3-mini-4k-instruct-IQ3_K_M.gguf | 1.96 GB | Upload Phi-3-mini-4k-instruct-IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| Phi-3-mini-4k-instruct-Q8_0.IQ3_K_M.gguf | 2.04 GB | Rename Phi-3-mini-4k-instruct-q8_0.IQ3_K_M.gguf to Phi-3-mini-4k-instruct-Q8_0.IQ3_K_M.gguf | 4 months ago |
| Replete-Coder-Instruct-8b-Adapted-Merged-F32.IQ4_K_M.gguf | 8.4 GB | Upload Replete-Coder-Instruct-8b-Adapted-Merged-F32.IQ4_K_M.gguf with huggingface_hub | 4 months ago |
| Replete-Coder-Instruct-8b-Adapted-Merged-F32.IQ6_K.gguf | 9.94 GB | Upload Replete-Coder-Instruct-8b-Adapted-Merged-F32.IQ6_K.gguf with huggingface_hub | 4 months ago |
| Smegmma-Deluxe-9B-v1-F32.IQ4_K_M.gguf | 8.68 GB | Upload Smegmma-Deluxe-9B-v1-F32.IQ4_K_M.gguf with huggingface_hub | 4 months ago |
| gemma-2-9b-it-F16.IQ3_K_M.gguf | 5.84 GB | Upload gemma-2-9b-it-F16.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| gemma-2-9b-it-F32.IQ3_K_M.gguf | 7.68 GB | Upload gemma-2-9b-it-F32.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| gemma-2-9b-it-Q8_0.IQ3_K_M.gguf | 4.98 GB | Upload gemma-2-9b-it-Q8_0.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| gemma-2-9b-it.IQ3_K_M.gguf | 4.76 GB | Upload gemma-2-9b-it.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
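To use one of these quantized models locally, a GGUF file can be fetched with `hf_hub_download` and then loaded by a GGUF-aware runtime such as llama.cpp. The snippet below is a sketch; the filename is just one entry picked from the table above and can be swapped for any other.

```python
# Download sketch: fetch a single GGUF file from this repo into the local HF cache.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="Joseph717171/Models",
    filename="gemma-2-9b-it.IQ3_K_M.gguf",  # any filename from the table above
)
print(gguf_path)  # pass this path to a GGUF runtime, e.g. llama.cpp's -m/--model flag
```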