bartowski committed on
Commit 6cd04ac
1 Parent(s): 9f0902e

Update README.md

Files changed (1)
  1. README.md +119 -19
README.md CHANGED
@@ -14,29 +14,129 @@ language:
  license: cc-by-nc-4.0
  quantized_by: bartowski
  pipeline_tag: text-generation
  ---

- ## Llamacpp Quantizations of c4ai-command-r-v01

- Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization.

- Original model: https://huggingface.co/CohereForAI/c4ai-command-r-v01

- Download a file (not the whole branch) from below:

- | Filename | Quant type | File Size | Description |
- | -------- | ---------- | --------- | ----------- |
- | [c4ai-command-r-v01-Q8_0.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q8_0.gguf) | Q8_0 | 37.17GB | Extremely high quality, generally unneeded but max available quant. |
- | [c4ai-command-r-v01-Q6_K.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q6_K.gguf) | Q6_K | 28.70GB | Very high quality, near perfect, *recommended*. |
- | [c4ai-command-r-v01-Q5_K_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q5_K_M.gguf) | Q5_K_M | 25.00GB | High quality, very usable. |
- | [c4ai-command-r-v01-Q5_K_S.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q5_K_S.gguf) | Q5_K_S | 24.33GB | High quality, very usable. |
- | [c4ai-command-r-v01-Q5_0.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q5_0.gguf) | Q5_0 | 24.33GB | High quality, older format, generally not recommended. |
- | [c4ai-command-r-v01-Q4_K_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q4_K_M.gguf) | Q4_K_M | 21.52GB | Good quality, similar to 4.25 bpw. |
- | [c4ai-command-r-v01-Q4_K_S.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q4_K_S.gguf) | Q4_K_S | 20.37GB | Slightly lower quality with small space savings. |
- | [c4ai-command-r-v01-Q4_0.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q4_0.gguf) | Q4_0 | 20.22GB | Decent quality, older format, generally not recommended. |
- | [c4ai-command-r-v01-Q3_K_L.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q3_K_L.gguf) | Q3_K_L | 19.14GB | Lower quality but usable, good for low RAM availability. |
- | [c4ai-command-r-v01-Q3_K_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q3_K_M.gguf) | Q3_K_M | 17.61GB | Even lower quality. |
- | [c4ai-command-r-v01-Q3_K_S.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q3_K_S.gguf) | Q3_K_S | 15.86GB | Low quality, not recommended. |
- | [c4ai-command-r-v01-Q2_K.gguf](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF/blob/main/c4ai-command-r-v01-Q2_K.gguf) | Q2_K | 13.81GB | Extremely low quality, *not* recommended. |

- Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
  license: cc-by-nc-4.0
  quantized_by: bartowski
  pipeline_tag: text-generation
+ lm_studio:
+   param_count: 35b
+   use_case: general
+   release_date: 11-03-2024
+   model_creator: CohereForAI
+   prompt_template: cohere_command_r
+   system_prompt: none
+   base_model: cohere
+   original_repo: CohereForAI/c4ai-command-r-v01
  ---

+ ## 💫 Community Model> C4AI Command-R 35B by Cohere For AI
+
+ *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
+
+ **Model creator:** [Cohere For AI](https://huggingface.co/CohereForAI)<br>
+ **Original model:** [c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)<br>
+ **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2536](https://github.com/ggerganov/llama.cpp/releases/tag/b2536)<br>
+
+ ## Model Summary:
+
+ C4AI Command-R is a 35B parameter model released by Cohere For AI. Command-R is optimized for a variety of use cases including reasoning, summarization, and question answering. It supports multilingual generation, evaluated in 10 languages, along with highly performant RAG capabilities.
+
+ ## Prompt Template:
+
+ Choose the `Cohere Command R` preset in your LM Studio.
+
+ Under the hood, the model will see a prompt that's formatted like so:
+ ```
+ <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
+ ```
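+
+ If you are driving the GGUF yourself (for example through `llama-cpp-python` or a `llama.cpp` server) instead of using the preset, the template amounts to plain string formatting. A minimal sketch, using a hypothetical helper name:
+
+ ```python
+ # Minimal, illustrative helper that reproduces the single-turn template above.
+ # Note: many runtimes (llama.cpp included) prepend the BOS token automatically,
+ # in which case "<BOS_TOKEN>" should be dropped here to avoid doubling it.
+ def format_command_r_prompt(user_message: str) -> str:
+     return (
+         "<BOS_TOKEN>"
+         "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>"
+         + user_message
+         + "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
+     )
+
+ print(format_command_r_prompt("Can you provide a merge sort implementation in python?"))
+ ```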
+
+ ## Use case and examples
+
+ Command-R is tuned for a broad range of uses such as reasoning, summarizing, general knowledge, and coding.
+
+ ### Reasoning:
+
+ <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>
+ Statements:
+
+ All mathematicians are logical thinkers.
+ No logical thinker is irrational.
+ Some scientists are mathematicians.
+ Conclusions:
+
+ I. Some scientists are logical thinkers.
+ II. No mathematician is irrational.
+ III. Some scientists are irrational.
+
+ Analyze these statements and decide which conclusions are correct based on the logical relationships.<|END_OF_TURN_TOKEN|>
+ <|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
+ The logical relationships between the statements and the correctness of each conclusion are as follows:
+
+ I. Some scientists are logical thinkers: This conclusion follows logically from the given statements. Since some scientists are mathematicians and all mathematicians are logical thinkers, it can be deduced that some scientists indeed exhibit logical thinking. This conclusion is correct.
+
+ II. No mathematician is irrational: This conclusion is a direct inference from the given statements. As no logical thinker is irrational, and all mathematicians are logical thinkers, it's reasonable to assume that no mathematician can be described as irrational. This conclusion is also correct.
+
+ III. Some scientists are irrational: This conclusion does not necessarily follow from the given statements. While it's true that not all scientists may be logical thinkers (as indicated in the second conclusion), it doesn't imply that they are irrational. The statement about scientists being irrational cannot be definitively concluded from the given information, so this conclusion is not correct.
+
+ In summary, only I and II are correct conclusions based on the logical relationships within the statements provided.<|END_OF_TURN_TOKEN|>
78
+
79
+ ### Coding:
80
+
81
+ <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>
82
+ Can you provide a merge sort implementation in python?<|END_OF_TURN_TOKEN|>
83
+ <|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
84
+ Here's a merge sort implementation in Python:
85
+ ```python
86
+ def merge_sort(arr):
87
+ if len(arr) > 1:
88
+ mid = len(arr) // 2
89
+ left = arr[:mid]
90
+ right = arr[mid:]
91
+
92
+ merge_sort(left)
93
+ merge_sort(right)
94
+
95
+ i = j = k = 0
96
+
97
+ while i < len(left) and j < len(right):
98
+ if left[i] < right[j]:
99
+ arr[k] = left[i]
100
+ i += 1
101
+ else:
102
+ arr[k] = right[j]
103
+ j += 1
104
+ k += 1
105
+
106
+ while i < len(left):
107
+ arr[k] = left[i]
108
+ i += 1
109
+ k += 1
110
+
111
+ while j < len(right):
112
+ arr[k] = right[j]
113
+ j += 1
114
+ k += 1
115
+
116
+ # Example usage
117
+ my_list = [12, 11, 13, 5, 6, 7]
118
+ merge_sort(my_list)
119
+ print(my_list)
120
+ ```
121
+
122
+ The `merge_sort` function takes a list `arr` as input and recursively divides it into two halves until each half contains a single element or is empty. It then merges the two sorted halves in a sorted manner. The example usage demonstrates how to use the function to sort a list.<|END_OF_TURN_TOKEN|>
123
+
124
+ ## Technical Details
125
+
126
+ This model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
127
+
128
+ Multilingual capabilities: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
129
+
130
+ Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
131
+
132
+ Supports a context length of 128k.
133
+
134
+ For more information on prompting, you can reference the official documentation [here](https://docs.cohere.com/docs/prompting-command-r)
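+
+ If you want to run one of these quants outside LM Studio, here is a minimal sketch using `llama-cpp-python` (assuming the package is installed and a quant file such as `c4ai-command-r-v01-Q4_K_M.gguf` has been downloaded locally; the path, context size, and sampling settings are illustrative, not official recommendations):
+
+ ```python
+ from llama_cpp import Llama
+
+ # Illustrative settings only: pick a context size your hardware can hold;
+ # the model supports up to 128k tokens, but smaller values are much cheaper.
+ llm = Llama(
+     model_path="c4ai-command-r-v01-Q4_K_M.gguf",  # any quant of this model
+     n_ctx=8192,
+     n_gpu_layers=-1,  # offload all layers if you have the VRAM; 0 for CPU only
+ )
+
+ # BOS is normally added by the tokenizer, so it is omitted from the string here.
+ prompt = (
+     "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>"
+     "Summarize the strengths of Command-R in two sentences."
+     "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
+ )
+
+ output = llm(prompt, max_tokens=256, stop=["<|END_OF_TURN_TOKEN|>"])
+ print(output["choices"][0]["text"])
+ ```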
+
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
+
+ ## Disclaimers
+
+ TBD