
Fietje banner

Fietje 2

An open and efficient LLM for Dutch

πŸ‘±β€β™€οΈ Base version (this one) - πŸ€– Instruct version - πŸ’¬ Chat version - πŸš€ GGUF of base

Chat with Fietje here!

Fietje is an adapted version of microsoft/phi-2, tailored to Dutch text generation through continued pretraining on 28B Dutch tokens. At 2.7 billion parameters it is small and efficient, yet it performs almost on par with more powerful Dutch LLMs of twice its size, such as GEITje 7B Ultra.

A thorough description of the creation and evaluation of Fietje, as well as usage examples, is available in this GitHub repository.
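As a minimal sketch, the base model can be loaded with the transformers library like any other causal language model; the prompt and generation settings below are merely illustrative assumptions, not recommended defaults.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BramVanroy/fietje-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the weights are stored in bfloat16
    device_map="auto",
)

# The base model is a plain completion model (no chat template),
# so we simply let it continue a Dutch prompt.
prompt = "Het mooiste aan de Nederlandse taal is"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For chat-style interaction, the instruct and chat versions linked above are better suited than this base model.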

Intended uses & limitations

The same limitations as phi-2, and LLMs in general, apply here. LLMs hallucinate, make mistakes, and should not be trusted. Use at your own risk!

Training data

Fietje was further pretrained (continued pretraining) on 28B Dutch tokens. This corpus consists of the full Dutch component of Wikipedia (accounting for around 15% of the tokens), supplemented with Dutch tokens from CulturaX. A newer version of this dataset can be found here, which also describes the filtering that took place to ensure high data quality.
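For illustration, a hedged sketch of how the two public sources mentioned above can be streamed with the datasets library; the repository IDs, configuration names, and snapshot date below are assumptions, and the actual filtered 28B-token mixture is only available through the linked dataset.

```python
from datasets import load_dataset

# Dutch Wikipedia (repository ID and snapshot name are assumptions)
wiki_nl = load_dataset("wikimedia/wikipedia", "20231101.nl", split="train", streaming=True)

# Dutch portion of CulturaX (repository ID is an assumption; access may
# require accepting the dataset terms on the Hugging Face Hub)
culturax_nl = load_dataset("uonlp/CulturaX", "nl", split="train", streaming=True)

# Peek at one document from each source
print(next(iter(wiki_nl))["text"][:200])
print(next(iter(culturax_nl))["text"][:200])
```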

Training procedure

I am thankful to the Flemish Supercomputer Center (VSC) for providing the computational power to accomplish this project. Including time spent waiting for jobs to be scheduled, training took around two weeks on four nodes of 4x A100 80GB GPUs each (16 GPUs in total).

Training was done with the wonderful alignment-handbook, using DeepSpeed as a backend. Exact training recipes and SLURM scripts are available in the GitHub repository.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 9e-05
  • train_batch_size: 40
  • eval_batch_size: 40
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 16
  • gradient_accumulation_steps: 3
  • total_train_batch_size: 1920
  • total_eval_batch_size: 640
  • optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07
  • lr_scheduler_type: linear
  • num_epochs: 1.0
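For reference, a small sketch of how the total batch sizes listed above follow from the per-device values (assuming, as is usual, that gradient accumulation only affects the training total):

```python
train_batch_size = 40             # per-device train batch size
eval_batch_size = 40              # per-device eval batch size
num_devices = 16                  # 4 nodes x 4 A100 80GB GPUs
gradient_accumulation_steps = 3

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size)     # 1920
print(total_eval_batch_size)      # 640
```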

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6334        | 0.13  | 900  | 1.5937          |
| 1.5469        | 0.26  | 1800 | 1.5051          |
| 1.4937        | 0.4   | 2700 | 1.4628          |
| 1.4633        | 0.53  | 3600 | 1.4375          |
| 1.4485        | 0.66  | 4500 | 1.4203          |
| 1.4374        | 0.79  | 5400 | 1.4085          |
| 1.4278        | 0.92  | 6300 | 1.4013          |

Framework versions

  • Transformers 4.39.1
  • PyTorch 2.1.2+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2

Open LLM Leaderboard Evaluation Results

These results are for the (English-language) Open LLM Leaderboard. For results specific to Dutch, check out ScandEval.

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                |  9.03 |
| IFEval (0-Shot)     | 20.98 |
| BBH (3-Shot)        | 15.60 |
| MATH Lvl 5 (4-Shot) |  0.91 |
| GPQA (0-Shot)       |  0.56 |
| MuSR (0-Shot)       |  5.16 |
| MMLU-PRO (5-Shot)   | 10.95 |