Triangle104 committed
Commit d07c6be
1 Parent(s): 533534e

Update README.md

Files changed (1): README.md (+26 −2)
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license: other
+license: apache-2.0
 base_model: TheDrummer/Ministrations-8B-v1
 tags:
 - llama-cpp
@@ -10,6 +10,30 @@ tags:
 This model was converted to GGUF format from [`TheDrummer/Ministrations-8B-v1`](https://huggingface.co/TheDrummer/Ministrations-8B-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/TheDrummer/Ministrations-8B-v1) for more details on the model.
 
+---
+Model details:
+-
+BeaverAI proudly presents...
+
+Ministrations 8B v1
+
+An RP finetune of Ministral 8B!
+
+Supported Chat Templates
+-
+Metharme (a.k.a. Pygmalion in ST)
+Mistral Tekken
+You can mix it up and see which works best for you.
+
+Link
+-
+Original: https://huggingface.co/TheDrummer/Ministrations-8B-v1
+
+Favorite RP Format
+-
+*action* Dialogue *thoughts* Dialogue *narration* in 1st person PoV
+
+---
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
 
@@ -48,4 +72,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo Triangle104/Ministrations-8B-v1-Q5_K_S-GGUF --hf-file ministrations-8b-v1-q5_k_s.gguf -c 2048
-```
+```
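The README's install-and-serve flow can be sketched as a short script. This is a non-authoritative sketch: the repo and file names come from the README's own `llama-server` command, the `brew install llama.cpp` step is the one the README names, and the server invocation is printed rather than executed since it downloads a multi-gigabyte model and blocks.

```shell
#!/usr/bin/env bash
# Sketch of the README's llama.cpp flow; repo/file names are taken from the README.

# Install step from the README (Homebrew, Mac and Linux); uncomment to run:
# brew install llama.cpp

HF_REPO="Triangle104/Ministrations-8B-v1-Q5_K_S-GGUF"
HF_FILE="ministrations-8b-v1-q5_k_s.gguf"

# The server invocation the README gives (-c 2048 sets the context size).
# Printed here rather than run, since it fetches the model and serves indefinitely:
echo "./llama-server --hf-repo $HF_REPO --hf-file $HF_FILE -c 2048"
```

Once the server is running it exposes an OpenAI-compatible HTTP endpoint on localhost, so any chat client pointed at it can use the quantized model.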