Update README.md
`README.md` (changed):
````diff
@@ -3,9 +3,9 @@ library_name: transformers
 license: apache-2.0
 ---
 
-# Model card for
+# Model card for MisterUkrainianDPO
 
-
+DPO Iteration of [MisterUkrainian](https://huggingface.co/Radu1999/MisterUkrainian)
 
 
 ## Instruction format
@@ -25,13 +25,7 @@ This instruction model is based on Mistral-7B-v0.2, a transformer model with the
 - Sliding-Window Attention
 - Byte-fallback BPE tokenizer
 
-
-- [UA-SQUAD](https://huggingface.co/datasets/FIdo-AI/ua-squad/resolve/main/ua_squad_dataset.json)
-- [Ukrainian StackExchange](https://huggingface.co/datasets/zeusfsx/ukrainian-stackexchange)
-- [UAlpaca Dataset](https://github.com/robinhad/kruk/blob/main/data/cc-by-nc/alpaca_data_translated.json)
-- [Ukrainian Subset from Belebele Dataset](https://github.com/facebookresearch/belebele)
-- [Ukrainian Subset from XQA](https://github.com/thunlp/XQA)
-
+
 ## 💻 Usage
 
 ```python
@@ -41,7 +35,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "Radu1999/
+model = "Radu1999/MisterUkrainianDPO"
 messages = [{"role": "user", "content": "What is a large language model?"}]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
````
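The usage snippet in the card stops after loading the tokenizer. A minimal sketch of how generation might be completed with the standard `transformers` chat workflow; the use of `pipeline` and the generation parameters (`max_new_tokens`, `temperature`) are assumptions for illustration, not part of the card:

```python
# Sketch: completing the card's usage snippet with a standard
# text-generation pipeline. Generation settings are illustrative.
from transformers import AutoTokenizer
import transformers
import torch

model = "Radu1999/MisterUkrainianDPO"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)

# Render the chat messages into the model's instruction format.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Build a generation pipeline; bfloat16 and device_map="auto" are
# common choices for a 7B model, not requirements of this card.
pipe = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```

Downloading the checkpoint is required on first run; `apply_chat_template` ensures the prompt matches the instruction format the model was tuned on.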