---

# v8karlo/TinyDolphin-2.8.1-1.1b-Q4_K_M-GGUF

UNCENSORED model.

To create your own GGUF file, go to the original TinyDolphin repo:

https://huggingface.co/cognitivecomputations/TinyDolphin-2.8-1.1b

Copy the model name, TinyDolphin-2.8-1.1b.

Then go to the GGUF-my-repo space at https://huggingface.co/spaces/ggml-org/gguf-my-repo, paste the model name into the Hub Model ID field, choose a Quantization Method, and press the Submit button.
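
The space then publishes the quantized model under your account. As an illustration (this is an observed naming pattern, not a documented GGUF-my-repo API), the repo and file names it produces can be sketched like this, matching this repo's own name:

```python
# Hypothetical helpers illustrating the naming convention GGUF-my-repo
# appears to follow for the repo it creates and the .gguf file inside it.
def gguf_repo_name(user: str, model: str, quant: str) -> str:
    """Repo id of the quantized copy: <user>/<model>-<quant>-GGUF."""
    return f"{user}/{model}-{quant}-GGUF"

def gguf_file_name(model: str, quant: str) -> str:
    """Lowercased name of the .gguf file inside that repo."""
    return f"{model}-{quant}".lower() + ".gguf"

print(gguf_repo_name("v8karlo", "TinyDolphin-2.8.1-1.1b", "Q4_K_M"))
# -> v8karlo/TinyDolphin-2.8.1-1.1b-Q4_K_M-GGUF
print(gguf_file_name("TinyDolphin-2.8.1-1.1b", "Q4_K_M"))
# -> tinydolphin-2.8.1-1.1b-q4_k_m.gguf
```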
This model was converted to GGUF format from [`cognitivecomputations/TinyDolphin-2.8.1-1.1b`](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.1-1.1b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.1-1.1b) for more details on the model.
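
Once downloaded, the .gguf file can be run locally with any llama.cpp front end. A minimal sketch using the community `llama-cpp-python` bindings (an assumption, not part of this card; the local filename is the one GGUF-my-repo typically produces):

```python
import os

# Assumed local filename for the Q4_K_M quantization of this model.
MODEL_FILE = "tinydolphin-2.8.1-1.1b-q4_k_m.gguf"

if os.path.exists(MODEL_FILE):
    from llama_cpp import Llama  # pip install llama-cpp-python

    # n_ctx sets the context window; tune to your hardware.
    llm = Llama(model_path=MODEL_FILE, n_ctx=2048)
    out = llm("Hello, my name is", max_tokens=16)
    print(out["choices"][0]["text"])
else:
    print(f"{MODEL_FILE} not found; download it from this repo first")
```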