---
license: cc-by-nc-4.0
---

# Model Card for Spivavtor-Large

This model was obtained by fine-tuning the `bigscience/mt0-large` model on the Spivavtor dataset. All details of the dataset and the fine-tuning process can be found in our paper and repository.

**Paper:** Spivavtor: An Instruction Tuned Ukrainian Text Editing Model

**Authors:** Aman Saini, Artem Chernodub, Vipul Raheja, Vivek Kulkarni

## Model Details

### Model Description

- **Language:** Ukrainian
- **Finetuned from model:** bigscience/mt0-large

## How to use

We make available the following models presented in our paper.

<table>
  <tr>
    <th>Model</th>
    <th>Number of parameters</th>
    <th>Reference name in Paper</th>
  </tr>
  <tr>
    <td>Spivavtor-large</td>
    <td>1.2B</td>
    <td>Spivavtor-mt0-large</td>
  </tr>
  <tr>
    <td>Spivavtor-xxl</td>
    <td>11B</td>
    <td>Spivavtor-aya-101</td>
  </tr>
</table>
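
Both models can be loaded through `transformers` by their Hugging Face repository ids. Below is a minimal loading sketch; only `grammarly/spivavtor-large` is confirmed by this card, so the `grammarly/spivavtor-xxl` id for the larger variant is an assumption. The Usage section below gives a full end-to-end example.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Pick a checkpoint. "grammarly/spivavtor-large" is the model on this card;
# "grammarly/spivavtor-xxl" is an assumed repository id for the larger variant.
checkpoint = "grammarly/spivavtor-large"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```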

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("grammarly/spivavtor-large")
model = AutoModelForSeq2SeqLM.from_pretrained("grammarly/spivavtor-large")

# Instruction followed by the input sentence (in Ukrainian):
# "Correct the grammar in this sentence: Thank you for the information! Nadiia and I have just left the house"
input_text = 'Виправте граматику в цьому реченнi: Дякую за iнформацiю! ми з Надiєю саме вийшли з дому'

input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=256)
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output_text)
```
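
The same generation call works for the other editing instructions from the paper. A minimal helper sketch (not code from the paper or repository) that reuses the `tokenizer` and `model` objects loaded above:

```python
# Minimal helper sketch: wraps the generation call so any Spivavtor-style
# instruction + input string can be passed in. Reuses `tokenizer` and `model`
# from the snippet above; the function name is illustrative, not from the paper.
def edit_text(instruction_and_text: str, max_length: int = 256) -> str:
    input_ids = tokenizer(instruction_and_text, return_tensors="pt").input_ids
    outputs = model.generate(input_ids, max_length=max_length)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

corrected = edit_text('Виправте граматику в цьому реченнi: Дякую за iнформацiю! ми з Надiєю саме вийшли з дому')
print(corrected)
```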