---
license: mit
base_model: BramVanroy/fietje-2b
tags:
  - trl
  - fietje
  - alignment-handbook
  - sft
datasets:
  - BramVanroy/ultrachat_200k_dutch
  - BramVanroy/no_robots_dutch
  - BramVanroy/belebele_dutch
model-index:
  - name: fietje-2b-instruct
    results: []
pipeline_tag: text-generation
inference: false
language:
  - nl
---

Fietje banner

# Fietje 2B Instruct

An open and efficient LLM for Dutch

👱‍♀️ Base version - 🤖 Instruct version (this one) - 💬 Chat version - 🚀 GGUF of Instruct

Chat with Fietje here!

This is the instruct version of Fietje, an SFT-tuned (instruction-tuned) variant of the base model. Fietje is an adapted version of microsoft/phi-2, tailored to Dutch text generation by training on 28B tokens of Dutch text. At 2.7 billion parameters it is small and efficient, while performing almost on par with more powerful Dutch LLMs of twice its size, such as GEITje 7B Ultra.

A thorough description of the creation and evaluation of Fietje, as well as usage examples, is available in this GitHub repository.
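For convenience, here is a minimal sketch (not the repository's official example) of how an instruction-tuned causal LM like this one is typically loaded with 🤗 Transformers. The repo id `BramVanroy/fietje-2-instruct` and the presence of a chat template are assumptions based on this model card; consult the GitHub repository above for the authoritative usage examples.

```python
# Minimal sketch, not the official usage example.
# Assumptions: the repo id below, and that the tokenizer ships a chat template
# (typical for SFT models trained with the alignment-handbook).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BramVanroy/fietje-2-instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# "What is the capital of the Netherlands?"
messages = [{"role": "user", "content": "Wat is de hoofdstad van Nederland?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```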

## Intended uses & limitations

The same limitations as phi-2, and LLMs in general, apply here. LLMs hallucinate, make mistakes, and should not be trusted. Use at your own risk!

## Training and evaluation data

Fietje 2B Instruct was finetuned from the base model on the following datasets, totalling 201,579 training samples:

- BramVanroy/ultrachat_200k_dutch
- BramVanroy/no_robots_dutch
- BramVanroy/belebele_dutch
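As an illustration only (not part of the original card), the finetuning data can be inspected with the 🤗 Datasets library; no split names or column layout are assumed, the sketch simply prints whatever each dataset repository provides.

```python
# Illustrative sketch: inspect one of the finetuning datasets.
# Split names and columns are not assumed; we print what the repo defines.
from datasets import load_dataset

ds = load_dataset("BramVanroy/ultrachat_200k_dutch")
print(ds)                   # available splits and their sizes
first_split = next(iter(ds))
print(ds[first_split][0])   # one example from the first split
```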

## Training procedure

I am thankful to the Flemish Supercomputer Center (VSC) for providing the computational power to accomplish this project. Accounting for waiting for jobs, training took around a day on four nodes of 4x A100 80GB each (16 GPUs in total). I cannot find the exact training time anymore, and I do not think that the runtime in all_results.json accounts for interrupted-and-continued jobs.

Training was done with the wonderful alignment-handbook, using DeepSpeed as a backend. The exact training recipes and SLURM script are given in the GitHub repository.

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 6e-05
- train_batch_size: 42
- eval_batch_size: 42
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 672
- total_eval_batch_size: 672
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
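As a quick sanity check (my own, not from the training logs), the reported effective batch size follows directly from the per-device settings above, which implies no gradient accumulation was used:

```python
# Effective batch size = per-device batch size x number of devices
# (x gradient accumulation steps, implicitly 1 here).
train_batch_size = 42   # per-device
num_devices = 16        # 4 nodes x 4x A100 80GB

total_train_batch_size = train_batch_size * num_devices
print(total_train_batch_size)  # 672, matching the value reported above
```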

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9325        | 1.0   | 178  | 0.9060          |
| 0.8687        | 2.0   | 356  | 0.8850          |
| 0.8385        | 3.0   | 534  | 0.8818          |
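For interpretation only, and assuming the reported validation loss is the mean token-level cross-entropy in nats (the usual Hugging Face Trainer convention), the final loss corresponds to a validation perplexity of roughly 2.4:

```python
import math

# If validation loss is mean cross-entropy in nats, perplexity = exp(loss).
final_val_loss = 0.8818
print(round(math.exp(final_val_loss), 3))  # ~2.415
```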

### Framework versions

- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
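If you want to check that your local environment matches the versions above, a small sketch such as the following (my own addition, not from the card) prints the installed versions for comparison:

```python
# Print installed versions to compare against the ones reported above.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)  # reported: 4.39.1
print("Pytorch:", torch.__version__)              # reported: 2.1.2+cu121
print("Datasets:", datasets.__version__)          # reported: 2.18.0
print("Tokenizers:", tokenizers.__version__)      # reported: 0.15.2
```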