---
license: cc-by-nc-4.0
inference: false
datasets:
- BramVanroy/alpaca-cleaned-dutch
base_model: DAMO-NLP-MT/polylm-13b
tags:
- generated_from_trainer
- alpaca
- Transformers
- PolyLM
- text-generation-inference
model-index:
- name: polylm_13b_ft_alpaca_clean_dutch
results: []
language:
- nl
library_name: peft
pipeline_tag: text-generation
---
# polylm_13b_ft_alpaca_clean_dutch
## Model description
This adapter model is a fine-tuned version of [DAMO-NLP-MT/polylm-13b](https://huggingface.co/DAMO-NLP-MT/polylm-13b).
It achieves the following results on the evaluation set:
- Loss: 1.3355
Finetuning was performed on the Dutch [BramVanroy/alpaca-cleaned-dutch](https://www.huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch) dataset, which contains 52K records of instruction-following data translated from English to Dutch.
See [DAMO-NLP-MT/polylm-13b](https://huggingface.co/DAMO-NLP-MT/polylm-13b) for all information about the base model.
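To generate text with this adapter, load it on top of the 4-bit quantized base model using `transformers` and `peft`. The sketch below makes a few assumptions: the adapter repo id, the `trust_remote_code` flag and the Dutch Alpaca-style prompt template; check the training notebook linked under *Training procedure* for the exact prompt format used during finetuning.
```python
# Minimal inference sketch. The adapter repo id and the Dutch Alpaca-style
# prompt template are assumptions; see the training notebook for the exact
# template used during finetuning.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "DAMO-NLP-MT/polylm-13b"
adapter_id = "robinsmits/polylm_13b_ft_alpaca_clean_dutch"  # assumed repo id for this adapter

tokenizer = AutoTokenizer.from_pretrained(base_model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    load_in_4bit=True,            # matches the 4-bit (nf4) training setup
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,       # may or may not be required by the base repo
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# "Describe in one sentence what a language model is."
prompt = "### Instructie:\nBeschrijf in één zin wat een taalmodel is.\n\n### Antwoord:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```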
## Intended uses & limitations
The PolyLM-13B base model was trained on 18 languages, with the primary focus of creating a multilingual open LLM.
Dutch was one of those 18 languages, and a diverse combination of multilingual datasets was used to train the base model.
The generated output and performance of this model for Dutch are very likely not always comparable to the various Open-Llama models that have been finetuned on English Alpaca datasets.
The primary intention of this finetuned model is to explore and research the use of the Dutch language in combination with an open LLM.
## Bias, Risks, and Limitations
The information below is copied from the base model's [paper](https://arxiv.org/pdf/2307.06018.pdf) and also applies to this finetuned model.
> Our contributions are fully methodological: adding the support of multilingualism to LLM during training and SFT phases. It is unavoidable that PolyLM might exhibit several common deficiencies of language models, e.g. hallucination and toxicity. PolyLM should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Training and evaluation data
This model was trained on the [BramVanroy/alpaca-cleaned-dutch](https://www.huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch) dataset.
The dataset is the Dutch translation of the English Alpaca Cleaned instruction dataset.
The dataset license permits non-commercial use only; commercial use is strictly forbidden.
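For reference, the finetuning data can be inspected directly with the `datasets` library; the sketch below assumes the standard `train` split and Alpaca-style columns.
```python
# Minimal sketch: load and inspect the Dutch Alpaca dataset.
from datasets import load_dataset

# The "train" split name is an assumption; check the dataset card for the exact splits.
dataset = load_dataset("BramVanroy/alpaca-cleaned-dutch", split="train")
print(dataset)      # number of records and column names
print(dataset[0])   # one translated instruction/input/output record
```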
## Training procedure
This model was finetuned with a QLoRA setup on a Google Colab A100 GPU in about 7 hours.
The notebook used for training can be found here: [Training Notebook](https://github.com/RobinSmits/Dutch-LLMs/blob/main/PolyLM_13B_Alpaca_Clean_Dutch_Qlora.ipynb)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 64
- num_epochs: 2
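Expressed as `transformers.TrainingArguments`, these settings look roughly like the sketch below; `output_dir`, `bf16` and the evaluation/logging/save cadence are assumptions, the other values mirror the list above.
```python
# Sketch of the hyperparameters above as TrainingArguments.
# output_dir, bf16 and the evaluation/logging/save cadence are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="polylm_13b_ft_alpaca_clean_dutch",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # total train batch size: 8 * 8 = 64
    num_train_epochs=2,
    lr_scheduler_type="linear",
    warmup_steps=64,
    seed=42,
    optim="adamw_torch",             # assumed; the listed betas/epsilon are the Adam(W) defaults
    bf16=True,                       # assumed, matching bnb_4bit_compute_dtype=bfloat16
    evaluation_strategy="steps",     # assumed
    eval_steps=128,                  # matches the 128-step cadence in the results table
    logging_steps=128,               # assumed
    save_steps=128,                  # assumed
)
```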
The following bitsandbytes quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
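The quantization settings above translate to the `BitsAndBytesConfig` in the sketch below, followed by an illustrative QLoRA adapter setup. The `LoraConfig` values (rank, alpha, dropout, target modules) are not reported in this card and are assumptions only; the training notebook contains the actual configuration.
```python
# The bitsandbytes settings above as a BitsAndBytesConfig, plus an illustrative
# QLoRA adapter setup. The LoraConfig values are assumptions: rank, alpha,
# dropout and target modules are not reported in this card.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "DAMO-NLP-MT/polylm-13b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,   # may or may not be required by the base repo
)
base_model = prepare_model_for_kbit_training(base_model)

lora_config = LoraConfig(     # illustrative values, not from this card
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # target_modules may need to be set explicitly for the PolyLM architecture
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```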
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4311 | 0.16 | 128 | 1.4541 |
| 1.3936 | 0.33 | 256 | 1.4141 |
| 1.423 | 0.49 | 384 | 1.3960 |
| 1.3672 | 0.66 | 512 | 1.3832 |
| 1.3809 | 0.82 | 640 | 1.3754 |
| 1.3581 | 0.99 | 768 | 1.3652 |
| 1.3534 | 1.15 | 896 | 1.3599 |
| 1.3334 | 1.32 | 1024 | 1.3535 |
| 1.3351 | 1.48 | 1152 | 1.3475 |
| 1.3178 | 1.65 | 1280 | 1.3411 |
| 1.3341 | 1.81 | 1408 | 1.3378 |
| 1.2976 | 1.98 | 1536 | 1.3355 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
- PEFT 0.4.0