---
language:
  - bg
tags:
  - generation
  - question answering
  - instruction tuning
license: cc-by-nc-4.0
---

# Model Description

This HF repository contains a base LLM that was instruction tuned (SFT) with full-parameter fine-tuning, then used to study whether monolingual or multilingual instruction tuning is more favourable.

## Instruction tuning details

- Base model: bloom-1b1
- Instruction tuning language: Bulgarian
- Training method: full-parameter fine-tuning
- Best checkpoint: best cross-entropy on a validation set; trained for 3 epochs
- Dataset: machine-translated from yahma/alpaca-cleaned. You can download our data HERE.
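For readers unfamiliar with alpaca-cleaned-style SFT data, the sketch below shows the standard Alpaca prompt template that such datasets are commonly rendered into before fine-tuning. This is an illustrative assumption, not this repository's exact preprocessing code; the translated Bulgarian data is assumed to keep the same `instruction` / `input` / `output` schema.

```python
# A minimal sketch (an assumption, not this repo's exact code) of the standard
# Alpaca prompt template used when fine-tuning on alpaca-cleaned-style data.
def format_alpaca(instruction: str, inp: str = "") -> str:
    """Render one training example into the Alpaca prompt template."""
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Input:\n{inp}\n\n### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

# Example with a Bulgarian instruction, matching the tuning language above.
print(format_alpaca("Преведи 'hello' на български."))
```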

## Usage

The model checkpoint should be loaded with the `transformers` library.
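A minimal loading sketch is shown below. The repository ID of this checkpoint is not stated here, so the base model ID `bigscience/bloom-1b1` is used as a stand-in assumption; substitute this repository's actual ID.

```python
# Hedged sketch: loading the checkpoint and generating a completion with
# transformers. The model_id below is an assumption (the base model);
# replace it with this repository's actual ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-1b1"  # assumption: substitute this repo's ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Какво представлява инструкционното обучение?"  # a Bulgarian prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```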

Please refer to our GitHub repository HERE for inference and training instructions.

## Citation

@inproceedings{chen-etal-2024-monolingual,
  title = "Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
  author = "Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
  booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
  year = "2024",
}