---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
quantized_by: bartowski
pipeline_tag: text-generation
lm_studio:
  param_count: 35b
  use_case: general
  release_date: 11-03-2024
  model_creator: CohereForAI
  prompt_template: cohere_command_r
  system_prompt: none
  base_model: cohere
  original_repo: CohereForAI/c4ai-command-r-v01
---
## 💫 Community Model> C4AI Command-R 35B by Cohere For AI
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Cohere For AI](https://huggingface.co/CohereForAI)<br>
**Original model**: [c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2536](https://github.com/ggerganov/llama.cpp/releases/tag/b2536)<br>
## Model Summary:
C4AI Command-R is a 35B parameter model released by Cohere For AI. Command-R is optimized for a variety of use cases including reasoning, summarization, and question answering. It supports multilingual generation, evaluated in 10 languages, and offers highly performant RAG capabilities.
## Prompt Template:
Choose the `Cohere Command R` preset in LM Studio.
Under the hood, the model will see a prompt formatted like so:
```
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
```
Note that this model does not support a System prompt.
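If you are scripting against the original weights rather than using the LM Studio preset, the same template can be produced with `transformers`' chat-template support. This is a minimal sketch, assuming access to the original `CohereForAI/c4ai-command-r-v01` repository:
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CohereForAI/c4ai-command-r-v01")

messages = [{"role": "user", "content": "Hello, how are you?"}]

# tokenize=False returns the formatted string rather than token IDs;
# add_generation_prompt=True appends <|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
# so the model knows it is its turn to respond.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```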
## Use case and examples
Command-R is tuned for a broad range of uses such as reasoning, summarizing, general knowledge, and coding.
### Reasoning:
```
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>
Statements:
All mathematicians are logical thinkers.
No logical thinker is irrational.
Some scientists are mathematicians.
Conclusions:
I. Some scientists are logical thinkers.
II. No mathematician is irrational.
III. Some scientists are irrational.
Analyze these statements and decide which conclusions are correct based on the logical relationships.<|END_OF_TURN_TOKEN|>
<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
The logical relationships between the statements and the correctness of each conclusion are as follows:
I. Some scientists are logical thinkers: This conclusion follows logically from the given statements. Since some scientists are mathematicians and all mathematicians are logical thinkers, it can be deduced that some scientists indeed exhibit logical thinking. This conclusion is correct.
II. No mathematician is irrational: This conclusion is a direct inference from the given statements. As no logical thinker is irrational, and all mathematicians are logical thinkers, it's reasonable to assume that no mathematician can be described as irrational. This conclusion is also correct.
III. Some scientists are irrational: This conclusion does not necessarily follow from the given statements. While it's true that not all scientists may be logical thinkers (as indicated in the second conclusion), it doesn't imply that they are irrational. The statement about scientists being irrational cannot be definitively concluded from the given information, so this conclusion is not correct.
In summary, only I and II are correct conclusions based on the logical relationships within the statements provided.<|END_OF_TURN_TOKEN|>
```
### Coding:
```
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>
Can you provide a merge sort implementation in python?<|END_OF_TURN_TOKEN|>
<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
Here's a merge sort implementation in Python:
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2
        left = arr[:mid]
        right = arr[mid:]

        merge_sort(left)
        merge_sort(right)

        i = j = k = 0

        while i < len(left) and j < len(right):
            if left[i] < right[j]:
                arr[k] = left[i]
                i += 1
            else:
                arr[k] = right[j]
                j += 1
            k += 1

        while i < len(left):
            arr[k] = left[i]
            i += 1
            k += 1

        while j < len(right):
            arr[k] = right[j]
            j += 1
            k += 1

# Example usage
my_list = [12, 11, 13, 5, 6, 7]
merge_sort(my_list)
print(my_list)
The `merge_sort` function takes a list `arr` as input and recursively divides it into two halves until each half contains a single element or is empty. It then merges the two sorted halves in a sorted manner. The example usage demonstrates how to use the function to sort a list.<|END_OF_TURN_TOKEN|>
```
## Technical Details
This model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
Multilingual capabilities: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
Supports a context length of 128k.
For more information on prompting, see the official documentation [here](https://docs.cohere.com/docs/prompting-command-r).
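To illustrate the context-length setting when running these GGUF files outside LM Studio, here is a minimal sketch using `llama-cpp-python`, which wraps the same `llama.cpp` these quants target. The file name is a placeholder, and `n_ctx` can be raised toward the 128k limit as memory allows:
```
from llama_cpp import Llama

llm = Llama(
    model_path="c4ai-command-r-v01-Q4_K_M.gguf",  # placeholder file name
    n_ctx=8192,  # the model supports up to 128k; larger values need more memory
)

# llama.cpp prepends the BOS token itself, so the prompt starts at the first
# turn token of the Command-R template shown above.
prompt = (
    "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>"
    "Summarize merge sort in one sentence."
    "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
)

output = llm(prompt, max_tokens=128, stop=["<|END_OF_TURN_TOKEN|>"])
print(output["choices"][0]["text"])
```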
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
TBD