ssmits committed on
Commit
a0ae630
1 Parent(s): 5bc9a1c

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -14,7 +14,7 @@ library_name: transformers
 
 # Model Card for Zamba2-1.2B-instruct-Dutch
 
- Zamba2-1.2B-instruct-Dutch is a Dutch language instruction-following model obtained through a two-stage fine-tuning process:
+ Zamba2-1.2B-instruct-Dutch is a basic Dutch language instruction-following model obtained through a two-stage fine-tuning process:
 
 1. First stage (Base instruction model by Zyphra):
 - Zyphra fine-tuned Zamba2-1.2B to create Zamba2-1.2B-instruct through:
@@ -22,9 +22,9 @@ Zamba2-1.2B-instruct-Dutch is a Dutch language instruction-following model obtai
 - DPO training on [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized), [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs), and [OpenHermesPreferences](https://huggingface.co/datasets/argilla/OpenHermesPreferences)
 
 2. Second stage (Dutch language adaptation):
- - Further fine-tuning of Zyphra's Zamba2-1.2B-instruct on the [dolly-15k-dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch) dataset, specifically using the training split
+ - Further fine-tuning of Zyphra's Zamba2-1.2B-instruct on the [dolly-15k-dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch) dataset, specifically using the training split. While this dataset is not state-of-the-art, it provides a solid foundation for demonstrating Dutch language capabilities and fits within the 1024 token context window. The relatively small dataset size allows for quick experimentation and validation of the model's Dutch language adaptation capabilities.
 
- The model maintains the core hybrid architecture of Zamba2 while being optimized for Dutch language understanding and generation.
+ The model maintains the core hybrid architecture of Zamba2 while being optimized for Dutch language understanding and generation. By building upon Zyphra's instruction-tuned model, it inherits strong general instruction-following capabilities while adding Dutch language proficiency.
 
 ## Quick start
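The hunk above pins down the Dutch adaptation data: the train split of [dolly-15k-dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch), packed into a 1024-token context window. Purely as a hypothetical sketch of how that data could be prepared (the column names, the chat-template call, and the base checkpoint are assumptions, not taken from this repository's training code):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Train split only, as stated in the model card.
dataset = load_dataset("BramVanroy/dolly-15k-dutch", split="train")

# Tokenizer of the base instruct model (assumed starting checkpoint).
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-1.2B-instruct")

def format_and_tokenize(example):
    # Column names ("instruction", "context", "response") follow the original
    # Dolly schema and are an assumption; check the dataset card for the real schema.
    prompt = example["instruction"]
    if example.get("context"):
        prompt = prompt + "\n\n" + example["context"]
    text = tokenizer.apply_chat_template(
        [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": example["response"]},
        ],
        tokenize=False,
    )
    # Truncate to the 1024-token context window mentioned above.
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)
```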
 
@@ -68,7 +68,7 @@ The model was fine-tuned using the following approach:
 
 ### Fine-tuning Configuration
 
- The model includes an advanced learning rate optimization system for fine-tuning, implemented through the custom `LROptimizerCallback` class which can be found in _lr_optimizer.py_:
+ The model includes an advanced learning rate optimization system for fine-tuning, implemented through the `LROptimizerCallback` class:
 
 ```python
 from transformers import AutoTokenizer, Trainer
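# (The diff cuts the original snippet off after the import above, so the body of
# LROptimizerCallback is not visible in this commit. What follows is a hypothetical
# illustration of a learning-rate-optimizing TrainerCallback, not the repository's
# actual _lr_optimizer.py implementation; the reduce-on-plateau behaviour and the
# parameter names are assumptions.)
from transformers import TrainerCallback

class LROptimizerCallback(TrainerCallback):
    """Hypothetical stand-in: lower the learning rate when the evaluation
    loss stops improving, similar to reduce-on-plateau scheduling."""

    def __init__(self, factor=0.5, patience=2):
        self.factor = factor      # multiply base learning rates by this on a plateau
        self.patience = patience  # evaluations without improvement to tolerate
        self.best_loss = float("inf")
        self.bad_evals = 0

    def on_evaluate(self, args, state, control, lr_scheduler=None, metrics=None, **kwargs):
        # Trainer passes the scheduler and the latest metrics to callback hooks.
        eval_loss = (metrics or {}).get("eval_loss")
        if eval_loss is None or lr_scheduler is None:
            return
        if eval_loss < self.best_loss:
            self.best_loss = eval_loss
            self.bad_evals = 0
            return
        self.bad_evals += 1
        if self.bad_evals >= self.patience:
            # Scale the scheduler's base learning rates so every later step is
            # computed from the reduced values.
            lr_scheduler.base_lrs = [lr * self.factor for lr in lr_scheduler.base_lrs]
            self.bad_evals = 0

# Such a callback would be attached via Trainer(..., callbacks=[LROptimizerCallback()]).
```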