dvres committed on
Commit 8234eed
1 Parent(s): df575d1

Update README.md

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -8,26 +8,26 @@ pipeline_tag: text-generation

  # Model Card for OPT_GaMS-1B-Chat

- We proudly present the familly of GaMS (Generative Model for Slovene) models. The 1B version is based on [Facebook's OPT model](https://huggingface.co/facebook/opt-1.3b) and is adapted for Slovene. OPT_GaMS models use original OPT tokenizer. This is the instruction-tuned version of the model.

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/652d40a78fa1fbb0aae165bb/YnvcP1x7CHH-eTY2oB-69.png)

  ## Acknowledgment

- The model was developed within the [PoVeJMo](https://povejmo.si) research program (Adaptive Natural Language Processing with Large Language Models}; Prilagodljiva obdelava naravnega jezika s pomočjo velikih jezikovnih modelov), particularly within the research project titled SloLLaMai -- Open-access computationally efficient models for Slovenian, funded within the Recovery and Resilience Plan (NOO; Načrt za okrevanje in odpornost) by the Slovenian Research and Innovation Agency (ARIS) and NextGenerationEU. The authors also acknowledge the financial support from the Slovenian Research and Innovation Agency (research core funding No. P6-0411 -- Language Resources and Technologies for Slovene).

- We thank everyone, who worked on data collection and preparation, enabling us to train our model. Special thanks goes to: Nikola Ljubešić, Tjaša Arčon, Jaka Čibej, Simon Krek, Tomaž Erjavec and Iztok Kosem.

  ## Basic information

- - **Developed by:** team of researchers at University of Ljubljana, Faculty for Computer and Information Science and XLAB.doo. Team members: Domen Vreš, Martin Božič, Aljaž Potočnik, Tomaž Martinčič, Iztok Lebar Bajec, Timotej Petrič and Marko Robnik-Šikonja.
  - **Language:** Slovene
  - **License:** Apache 2.0
  - **Repository:** https://github.com/SloLama/NeMo
  - **Paper:** https://www.sdjt.si/wp/wp-content/uploads/2024/09/JT-DH-2024_Vres_Bozic_Potocnik_Martincic_Robnik.pdf

  ## Intended usage
- This version of the model is quite small and lacks safety tuning. Hence, using it as a general purpose model is **STRONGLY DISCOURAGED!!!** The model might also contain certain biases. We do not recommend usage of this model in any other language than Slovene.

  The model can be efficiently tuned for specific use cases, as suggested by the promising results of fine-tuned models on the SuperGLUE and SI-NLI benchmarks.

@@ -65,7 +65,7 @@ print("Model's response:", response[0]["generated_text"][-1]["content"])
  The model was additionally pretrained on the following Slovene, English, and Croatian-Bosnian-Serbian (CBS) corpora:
  | Corpus | Language | # Tokens | Percentage |
  | :----- | :------- | :------: | :--------: |
- | Metafida | Slovene | 6.59 B | 13.89 % |
  | KAS | Slovene | 3.61 B | 7.62 % |
  | Trendi | Slovene | 1.4 B | 2.96 % |
  | mC4 | Slovene | 5.5 B | 11.6 % |
@@ -81,17 +81,17 @@ The total size of additional training data is **47.44 B** tokens.

  ### Training Procedure

- The model was trained using NeMo framework on Slovene HPC Vega, utilizing 64 A100 GPUs at once. Training took approximately 16 hours. The model was trained with batch size 1024 (2 million tokens) using Adam optimizer and cosine learning rate scheduler with 1000 warmup and constant steps.

  ### Supervised Finetuning (SFT)

- The model was trained on [GaMS-Instruct](http://hdl.handle.net/11356/1971) dataset (20.000 examples). The currated version of the dataset (7.000 examples) is publicly available. 19.050 examples were used as a training set and 950 examples were used as a validation set.

  The model was LoRA-tuned for 7 epochs with rank 1024. The model was trained with batch size 64 using the Adam optimizer and a cosine learning rate scheduler with 300 warmup steps.

  ## Evaluation

- The model was evaluated using [Slovene SuperGLUE](https://slobench.cjvt.si/leaderboard/view/3) and [SI-NLI](https://slobench.cjvt.si/leaderboard/view/9) tasks on [SloBench](https://slobench.cjvt.si). Additionally, the models was evaluated on imporved version of Slovenian-LLM-eval introduced by Aleksa Gordić. All decoder-type models were evaluated using few-shot prompts and were not finetuned on the benchmark (except for the versions with finetuned in the name).

  ### SuperGLUE results

 

  # Model Card for OPT_GaMS-1B-Chat

+ We proudly present the family of GaMS (Generative Model for Slovene) models. The 1B version is based on [Facebook's OPT model](https://huggingface.co/facebook/opt-1.3b) and is adapted for Slovene. OPT_GaMS models use the original OPT tokenizer. This is the instruction-tuned version of the model.

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/652d40a78fa1fbb0aae165bb/YnvcP1x7CHH-eTY2oB-69.png)

  ## Acknowledgment

+ The model was developed within the [PoVeJMo](https://www.cjvt.si/povejmo/en/project/) research program (Adaptive Natural Language Processing with Large Language Models), particularly within the research project titled SloLLaMai -- Open-access computationally efficient models for Slovenian. The program is funded within the Recovery and Resilience Plan by the Slovenian Research and Innovation Agency (ARIS) and NextGenerationEU. The authors also acknowledge the financial support from the Slovenian Research and Innovation Agency (research core funding No. P6-0411 -- Language Resources and Technologies for Slovene).

+ We thank everyone who worked on data collection and preparation, enabling us to train our model. Special thanks go to Nikola Ljubešić, Tjaša Arčon, Jaka Čibej, Simon Krek, Tomaž Erjavec and Iztok Kosem.

  ## Basic information

+ - **Developed by:** a team of researchers at the University of Ljubljana, Faculty of Computer and Information Science, and XLAB d.o.o. Team members: Domen Vreš, Martin Božič, Aljaž Potočnik, Tomaž Martinčič, Iztok Lebar Bajec, Timotej Petrič and Marko Robnik-Šikonja.
  - **Language:** Slovene
  - **License:** Apache 2.0
  - **Repository:** https://github.com/SloLama/NeMo
  - **Paper:** https://www.sdjt.si/wp/wp-content/uploads/2024/09/JT-DH-2024_Vres_Bozic_Potocnik_Martincic_Robnik.pdf

  ## Intended usage
+ This version of the model is quite small and lacks safety tuning. Hence, using it as a general-purpose model is **STRONGLY DISCOURAGED!** The model might also contain certain biases. We do not recommend using this model in any language other than Slovene.

  The model can be efficiently tuned for specific use cases, as suggested by the promising results of fine-tuned models on the SuperGLUE and SI-NLI benchmarks.
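
Since the model is intended for Slovene-language generation, a minimal inference sketch with the Hugging Face `transformers` pipeline is shown below. The model id `cjvt/OPT_GaMS-1B-Chat`, the chat-style message format and the generation settings are illustrative assumptions; the response is indexed the same way as in the model card's usage example.

```python
from transformers import pipeline

# Assumed repository id for this model; adjust if the actual id differs.
MODEL_ID = "cjvt/OPT_GaMS-1B-Chat"

pipe = pipeline("text-generation", model=MODEL_ID, device_map="auto")

# Chat-style input in Slovene.
messages = [{"role": "user", "content": "Kateri je najvišji vrh Slovenije?"}]
response = pipe(messages, max_new_tokens=256)
print("Model's response:", response[0]["generated_text"][-1]["content"])
```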

  The model was additionally pretrained on the following Slovene, English, and Croatian-Bosnian-Serbian (CBS) corpora:
  | Corpus | Language | # Tokens | Percentage |
  | :----- | :------- | :------: | :--------: |
+ | MetaFida | Slovene | 6.59 B | 13.89 % |
  | KAS | Slovene | 3.61 B | 7.62 % |
  | Trendi | Slovene | 1.4 B | 2.96 % |
  | mC4 | Slovene | 5.5 B | 11.6 % |
 

  ### Training Procedure

+ The model was trained using the NeMo framework on the Slovene HPC Vega, utilizing 64 A100 GPUs simultaneously. Training took approximately 16 hours. The model was trained with batch size 1024 (2 million tokens) using the Adam optimizer and a cosine learning rate scheduler with 1000 warmup and constant steps.
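
For reference, a small sketch of the learning-rate shape this describes (linear warmup, cosine decay, then a constant tail) is given below. The peak and minimum learning rates and the total step count are placeholders, since they are not stated here, and the exact NeMo schedule may differ in details.

```python
import math

def lr_at_step(step, total_steps, warmup_steps=1000, constant_steps=1000,
               peak_lr=3e-4, min_lr=3e-5):
    """Warmup -> cosine decay -> constant tail (illustrative values only)."""
    decay_steps = max(1, total_steps - warmup_steps - constant_steps)
    if step < warmup_steps:
        # Linear warmup from 0 to the peak learning rate.
        return peak_lr * step / max(1, warmup_steps)
    if step >= warmup_steps + decay_steps:
        # Constant tail at the minimum learning rate.
        return min_lr
    # Cosine decay from the peak to the minimum learning rate.
    progress = (step - warmup_steps) / decay_steps
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Example: learning rate halfway through a 20,000-step run.
print(lr_at_step(10_000, total_steps=20_000))
```
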

  ### Supervised Finetuning (SFT)

+ The model was trained on the [GaMS-Instruct](http://hdl.handle.net/11356/1971) dataset (20,000 examples). The curated version of the dataset (7,000 examples) is publicly available. 19,050 examples were used as a training set and 950 examples were used as a validation set.

  The model was LoRA-tuned for 7 epochs with rank 1024. The model was trained with batch size 64 using the Adam optimizer and a cosine learning rate scheduler with 300 warmup steps.
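
The LoRA tuning above was done in the NeMo framework; purely as an illustration, a roughly comparable setup with the Hugging Face `peft` library is sketched below. Rank 1024, an effective batch size of 64, 7 epochs and 300 warmup steps mirror the description above, while the base-model id, dataset path, learning rate and target modules are assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_ID = "cjvt/OPT_GaMS-1B"  # assumed id of the pretrained (non-chat) base model

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model = AutoModelForCausalLM.from_pretrained(BASE_ID)

# LoRA adapter with rank 1024 as stated above; target modules are an assumption
# for OPT-style attention blocks.
model = get_peft_model(model, LoraConfig(
    r=1024, lora_alpha=2048, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"]))

# Placeholder instruction data: a JSONL file with one "text" field per example.
dataset = load_dataset("json", data_files="gams_instruct_train.jsonl")["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="opt_gams_sft", num_train_epochs=7,
    per_device_train_batch_size=8, gradient_accumulation_steps=8,  # 64 effective on one device
    learning_rate=1e-4, lr_scheduler_type="cosine", warmup_steps=300)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)).train()
```
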

  ## Evaluation

+ The model was evaluated using [Slovene SuperGLUE](https://slobench.cjvt.si/leaderboard/view/3) and [SI-NLI](https://slobench.cjvt.si/leaderboard/view/9) tasks on [SloBench](https://slobench.cjvt.si). Additionally, the models were evaluated on an improved version of the Slovenian-LLM-eval introduced by Aleksa Gordić. All decoder-type models were evaluated using few-shot prompts and were not finetuned on the benchmark (except for the versions with "finetuned" in the name).
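
The few-shot setup is only described in general terms here; as a schematic, a prompt for an SI-NLI-style premise/hypothesis pair could be assembled as below. The Slovene field names, the label set and the number of shots are illustrative assumptions, not the prompts actually used for the SloBench submissions.

```python
# Schematic few-shot prompt for an SI-NLI-style task with a decoder-only model.
def build_few_shot_prompt(demonstrations, premise, hypothesis):
    blocks = [
        f"Premisa: {p}\nHipoteza: {h}\nOznaka: {label}"
        for p, h, label in demonstrations
    ]
    # The model is expected to continue the final block with a label.
    blocks.append(f"Premisa: {premise}\nHipoteza: {hypothesis}\nOznaka:")
    return "\n\n".join(blocks)

demonstrations = [
    ("Mačka spi na kavču.", "Žival počiva.", "entailment"),
    ("Zunaj dežuje.", "Zunaj je sončno in suho.", "contradiction"),
]
prompt = build_few_shot_prompt(demonstrations,
                               "Otroci se igrajo v parku.", "Park je prazen.")
print(prompt)  # fed to the model; the generated label is compared with the gold one
```
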

  ### SuperGLUE results