Commit 8360253 (parent: 6f44087) by v8karlo: Update README.md

Files changed (1): README.md (+6, -0)
@@ -23,6 +23,12 @@ Go to Convert-to-GGUF repo https://huggingface.co/spaces/ggml-org/gguf-my-repo a
 This model was converted to GGUF format from [`cognitivecomputations/TinyDolphin-2.8.1-1.1b`](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.1-1.1b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.1-1.1b) for more details on the model.
 
+ Convert Safetensors to GGUF:
+ https://huggingface.co/spaces/ggml-org/gguf-my-repo
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/662c3116277765660783ca6d/NhfH9cw1jZ77FeQV6EFT9.png)
+
+
+
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
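
The added "Convert Safetensors to GGUF" note points at the GGUF-my-repo space; for anyone who prefers to run the conversion locally, llama.cpp ships its own conversion script. The sketch below is a minimal, hedged example assuming current llama.cpp tooling (`convert_hf_to_gguf.py`, `llama-quantize`); script names, flags, and the output filenames shown here may differ between llama.cpp versions.

```bash
# Get llama.cpp and the Python dependencies for its conversion script
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Download the original Safetensors model from the Hub
huggingface-cli download cognitivecomputations/TinyDolphin-2.8.1-1.1b \
  --local-dir TinyDolphin-2.8.1-1.1b

# Convert Safetensors -> GGUF (f16 shown; other --outtype values exist)
python convert_hf_to_gguf.py TinyDolphin-2.8.1-1.1b \
  --outfile tinydolphin-2.8.1-1.1b-f16.gguf \
  --outtype f16

# Optional: quantize the f16 GGUF (llama-quantize is built via cmake/make)
./llama-quantize tinydolphin-2.8.1-1.1b-f16.gguf \
  tinydolphin-2.8.1-1.1b-Q4_K_M.gguf Q4_K_M
```

The GGUF-my-repo space performs roughly these steps server-side and pushes the resulting GGUF file to a new repository under your account.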