|
--- |
|
library_name: transformers |
|
license: apache-2.0 |
|
--- |
|
|
|
# Model card for Mistral-Instruct-Ukrainian-SFT |
|
|
|
Supervised fine-tuning (SFT) of Mistral-7B-Instruct-v0.2 on Ukrainian datasets.
|
|
|
|
|
## Instruction format |
|
|
|
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. |
|
|
|
E.g. |
|
```

text = "[INST]Відповідайте лише буквою правильної відповіді: Елементи експресіонізму наявні у творі: A. «Камінний хрест», B. «Інститутка», C. «Маруся», D. «Людина»[/INST]"

```

(In English, the prompt asks the model to reply with only the letter of the correct option to a multiple-choice question about Ukrainian literature.)
|
|
|
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method: |
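
For example, a minimal sketch (assuming the checkpoint ships the standard Mistral chat template, so the method reproduces the `[INST] ... [/INST]` wrapping shown above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Radu1999/Mistral-Instruct-Ukrainian-SFT")

messages = [
    {"role": "user", "content": "Відповідайте лише буквою правильної відповіді: Елементи експресіонізму наявні у творі: A. «Камінний хрест», B. «Інститутка», C. «Маруся», D. «Людина»"},
]

# The chat template wraps the user turn in [INST] ... [/INST] automatically
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```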
|
|
|
## Model Architecture |
|
This instruction model is based on Mistral-7B-v0.2, a transformer model with the following architecture choices: |
|
- Grouped-Query Attention |
|
- Sliding-Window Attention |
|
- Byte-fallback BPE tokenizer |
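
These choices can be checked against the model configuration. A minimal sketch (assuming the checkpoint uses the standard `MistralConfig` fields in `transformers`):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Radu1999/Mistral-Instruct-Ukrainian-SFT")

# Grouped-query attention: fewer key/value heads than query heads
print(config.num_attention_heads, config.num_key_value_heads)

# Sliding-window attention: size of the local attention window (None if disabled)
print(config.sliding_window)
```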
|
|
|
## Datasets |
|
- [UA-SQUAD](https://huggingface.co/datasets/FIdo-AI/ua-squad/resolve/main/ua_squad_dataset.json) |
|
- [Ukrainian StackExchange](https://huggingface.co/datasets/zeusfsx/ukrainian-stackexchange) |
|
- [UAlpaca Dataset](https://github.com/robinhad/kruk/blob/main/data/cc-by-nc/alpaca_data_translated.json) |
|
- [Ukrainian Subset from Belebele Dataset](https://github.com/facebookresearch/belebele) |
|
- [Ukrainian Subset from XQA](https://github.com/thunlp/XQA) |
|
|
|
## 💻 Usage |
|
|
|
```bash
pip install -qU transformers accelerate
```

```python
import torch
import transformers
from transformers import AutoTokenizer

model = "Radu1999/Mistral-Instruct-Ukrainian-SFT"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the [INST] ... [/INST] prompt via the tokenizer's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in bfloat16 and spread it over the available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Sample up to 256 new tokens from the model
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
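
With `do_sample=True`, the `temperature`, `top_k`, and `top_p` arguments control how much randomness goes into decoding; lowering the temperature, or passing `do_sample=False` for greedy decoding, yields more deterministic output, which can be preferable for short multiple-choice prompts like the one in the instruction-format example above.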
|
|
|
## Author |
|
|
|
Radu Chivereanu |