---
library_name: transformers
license: apache-2.0
---

# Model card for Mistral-7B-Instruct-Ukrainian

Mistral-7B-UK is a Large Language Model finetuned for the Ukrainian language.

Mistral-7B-UK is trained using the following recipe:
1. Initial finetuning of [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) using structured and unstructured datasets.
2. SLERP merge of the finetuned model with a model that outperforms `Mistral-7B-v0.2` on the `OpenLLM` benchmark: [NeuralTrix-7B](https://huggingface.co/CultriX/NeuralTrix-7B-v1) (the merge operation is sketched below).
3. DPO training of the merged model.
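
SLERP interpolates each pair of corresponding weight tensors along an arc rather than a straight line, which tends to preserve the geometry of both parent models better than plain averaging. The snippet below is only an illustrative sketch of that per-tensor operation; the merge itself was produced with a dedicated merging tool, so the function layout here is an assumption, not the exact recipe used:

```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    a_unit, b_unit = a / (a.norm() + eps), b / (b.norm() + eps)
    # Angle between the two (flattened) weight vectors.
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if omega.abs() < eps:
        merged = (1.0 - t) * a + t * b  # nearly parallel: fall back to linear interpolation
    else:
        merged = (torch.sin((1.0 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return merged.reshape(w_a.shape).to(w_a.dtype)

def slerp_merge(state_a: dict, state_b: dict, t: float = 0.5) -> dict:
    """Merge two state dicts of architecturally identical models, tensor by tensor."""
    return {name: slerp(state_a[name], state_b[name], t) for name in state_a}
```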





## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens.

For example:
```
text = "[INST]Відповідайте лише буквою правильної відповіді: Елементи експресіонізму наявні у творі: A. «Камінний хрест», B. «Інститутка», C. «Маруся», D. «Людина»[/INST]"
```
(In English: "Answer with only the letter of the correct option: elements of expressionism are present in the work: A. «Камінний хрест», B. «Інститутка», C. «Маруся», D. «Людина»".)

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
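
For example, reusing the prompt from above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SherlockAssistant/Mistral-7B-Instruct-Ukrainian")

messages = [
    {"role": "user", "content": "Відповідайте лише буквою правильної відповіді: Елементи експресіонізму наявні у творі: A. «Камінний хрест», B. «Інститутка», C. «Маруся», D. «Людина»"},
]

# apply_chat_template wraps the user turn in [INST] ... [/INST] according to this model's template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```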

## Model Architecture
This instruction model is based on Mistral-7B-v0.2, a transformer model with the following architecture choices (they can be checked from the published config, as sketched after this list):
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
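
A quick way to confirm the attention-related settings is to inspect the model config with `transformers`; the field names below are those defined by `MistralConfig`:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("SherlockAssistant/Mistral-7B-Instruct-Ukrainian")

# Grouped-Query Attention: fewer key/value heads than query heads.
print(config.num_attention_heads, config.num_key_value_heads)

# Sliding-Window Attention window size (may be None if the checkpoint disables it).
print(config.sliding_window)
```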

## Datasets - Structured
- [UA-SQUAD](https://huggingface.co/datasets/FIdo-AI/ua-squad/resolve/main/ua_squad_dataset.json)
- [Ukrainian StackExchange](https://huggingface.co/datasets/zeusfsx/ukrainian-stackexchange)
- [UAlpaca Dataset](https://github.com/robinhad/kruk/blob/main/data/cc-by-nc/alpaca_data_translated.json)
- [Ukrainian Subset from Belebele Dataset](https://github.com/facebookresearch/belebele)
- [Ukrainian Subset from XQA](https://github.com/thunlp/XQA)
- [ZNO Dataset provided in UNLP 2024 shared task](https://github.com/unlp-workshop/unlp-2024-shared-task/blob/main/data/zno.train.jsonl)

## Datasets - Unstructured
- Ukrainian Wiki

## Datasets - DPO
- Ukrainian translation of [distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs)
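
For orientation only, the DPO stage can be sketched with `trl`'s `DPOTrainer`. The exact hyperparameters, the intermediate merged checkpoint, and the column mapping of the translated dataset are not published, so everything model- and data-specific below is a placeholder, and argument names differ slightly between `trl` versions:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Hypothetical stand-ins: the intermediate merged checkpoint and the translated
# preference file are not published under these names.
model_id = "path/to/merged-mistral-7b-uk"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO training expects "prompt", "chosen" and "rejected" columns; the translated
# orca-dpo-pairs data would need to be mapped into that schema first.
train_dataset = load_dataset("json", data_files="uk_orca_dpo_pairs.jsonl", split="train")

args = DPOConfig(output_dir="mistral-7b-uk-dpo", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older trl versions name this argument `tokenizer`
)
trainer.train()
```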
  
## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "SherlockAssistant/Mistral-7B-Instruct-Ukrainian"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build an [INST] ... [/INST] prompt from the chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model for generation; bfloat16 and device_map="auto" keep memory usage manageable.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

## Citation

If you use this model in your research and publish a paper, please cite our paper:

**BIB**

```bib
@inproceedings{boros-chivereanu-dumitrescu-purcaru-2024-llm-uk,
    title = "Fine-tuning and Retrieval Augmented Generation for Question Answering using affordable Large Language Models",
    author = "Boros, Tiberiu and Chivereanu, Radu and Dumitrescu, Stefan Daniel and Purcaru, Octavian",
    booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop, LREC-COLING",
    month = may,
    year = "2024",
    address = "Torino, Italy",
    publisher = "European Language Resources Association",
}
```

**APA**

Boros, T., Chivereanu, R., Dumitrescu, S., & Purcaru, O. (2024). Fine-tuning and Retrieval Augmented Generation for Question Answering using affordable Large Language Models. In Proceedings of the Third Ukrainian Natural Language Processing Workshop, LREC-COLING. European Language Resources Association.

**MLA**

Boros, Tiberiu, Radu Chivereanu, Stefan Daniel Dumitrescu, and Octavian Purcaru. "Fine-tuning and Retrieval Augmented Generation for Question Answering using affordable Large Language Models." Proceedings of the Third Ukrainian Natural Language Processing Workshop, LREC-COLING. European Language Resources Association, 2024.

**Chicago**

Boros, Tiberiu, Radu Chivereanu, Stefan Daniel Dumitrescu, and Octavian Purcaru. "Fine-tuning and Retrieval Augmented Generation for Question Answering using affordable Large Language Models." In Proceedings of the Third Ukrainian Natural Language Processing Workshop, LREC-COLING. European Language Resources Association, 2024.