Create README.md
README.md ADDED
@@ -0,0 +1,42 @@
---
license: llama2
language:
- en
---

# Model Card for mncai/mistral-7b-v5

### Introduction of MindsAndCompany

https://mnc.ai/

We create various AI models and develop solutions that can be applied to businesses. In generative AI, we are developing products such as Code Assistant, TOD Chatbot, and LLMOps, and we are working toward Enterprise AGI (Artificial General Intelligence).

### Model Summary

Based on mistral-7b, instruction-tuned and aligned with DPO (Direct Preference Optimization).
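
Since the model is instruction-tuned, prompts follow the `### Instruction:` / `### Response:` template used in the example under "How to Use" below. A minimal, hypothetical sketch of assembling such a prompt (the `build_prompt` helper is ours, not part of the original card):

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the template shown in the usage example.

    Hypothetical helper; only the "### Instruction:" / "### Response:"
    template itself comes from this model card's example.
    """
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"


# Example prompt in the same style as the "How to Use" section.
prompt = build_prompt(
    "Two spheres have diameters of 1 and 2. "
    "How many times larger is the volume of the second sphere? Please explain."
)
```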

### How to Use

Here are some examples of how to use our model.

```python
import torch
import transformers
from transformers import AutoTokenizer

hf_model = 'mncai/mistral-7b-v5'

# Load the tokenizer and build a text-generation pipeline
# (dtype and device_map are assumptions; adjust to your hardware).
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example instruction (translated from the original Korean prompt):
# "Two spheres have diameters of 1 and 2. How many times larger is the
# volume of the second sphere? Please explain."
message = "### Instruction:\n\nTwo spheres have diameters of 1 and 2. How many times larger is the volume of the second sphere? Please explain.\n\n### Response:\n"

sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
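
If you would rather load the model directly than go through the pipeline helper, a minimal sketch using the standard transformers causal-LM API is shown below; the dtype, device placement, and generation settings are assumptions and may need adjusting for your setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

hf_model = 'mncai/mistral-7b-v5'

tokenizer = AutoTokenizer.from_pretrained(hf_model)
model = AutoModelForCausalLM.from_pretrained(
    hf_model,
    torch_dtype=torch.float16,  # assumption: half precision on a GPU
    device_map="auto",          # assumption: let accelerate place the weights
)

message = "### Instruction:\n\nTwo spheres have diameters of 1 and 2. How many times larger is the volume of the second sphere? Please explain.\n\n### Response:\n"

# Tokenize, generate with the same sampling settings as the pipeline example,
# and decode the completion.
inputs = tokenizer(message, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_k=10,
    max_length=2048,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```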

### Contact

If you have any questions, please raise an issue or contact us at dwmyoung@mnc.ai