ariel-ml committed 71b2802 (parent: 22406a5)

doc: README.md

Files changed (1): README.md (+54, -7)

---
license: llama2
language:
- hu
- en
tags:
- puli
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- finetuned
base_model: NYTK/PULI-LlumiX-32K
datasets:
- boapps/szurkemarha
pipeline_tag: text-generation
---

# PULI LlumiX 32K instruct (6.74 billion parameters)

Instruction-finetuned version of NYTK/PULI-LlumiX-32K.

## Training platform

[Runpod](https://runpod.io) RTX 4090 GPU

## Hyperparameters

- Epochs: 3
- LoRA rank (r): 16
- LoRA alpha: 16
- Learning rate: 2e-4
- Learning rate scheduler: cosine
- Optimizer: adamw_8bit
- Weight decay: 0.01

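These settings map onto a standard Unsloth + TRL LoRA run. The sketch below is a hypothetical reconstruction from the list above, not the actual training script: the target modules, batch size, 4-bit loading, and the exact `SFTTrainer` keyword names (which vary across TRL versions) are assumptions.

```python
# Hypothetical reconstruction of the fine-tuning setup from the
# hyperparameters above, using Unsloth + TRL as the tags suggest.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NYTK/PULI-LlumiX-32K",  # base model from this card
    max_seq_length=32768,               # 32K context (see Limitations)
    load_in_4bit=True,                  # assumption: QLoRA-style loading
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # LoRA rank from the list above
    lora_alpha=16,   # LoRA alpha from the list above
    lora_dropout=0.0,
    # Assumption: the standard LLaMA projection set.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("boapps/szurkemarha", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=TrainingArguments(
        output_dir="outputs",
        num_train_epochs=3,              # Epochs: 3
        learning_rate=2e-4,              # Learning rate: 2e-4
        lr_scheduler_type="cosine",      # cosine schedule
        optim="adamw_8bit",              # 8-bit AdamW
        weight_decay=0.01,
        per_device_train_batch_size=2,   # assumption
        gradient_accumulation_steps=4,   # assumption
    ),
)
trainer.train()
```
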
## Dataset

boapps/szurkemarha

Only the Hungarian instructions were selected: ~53,000 prompts.

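A sketch of that selection step follows; the column names are guesses at the boapps/szurkemarha schema, which this card does not document.

```python
# Hypothetical sketch of the Hungarian-only selection described above.
# The "language" column name is an assumption about the dataset schema.
from datasets import load_dataset

dataset = load_dataset("boapps/szurkemarha", split="train")
hungarian = dataset.filter(lambda row: row["language"] == "hu")
print(len(hungarian))  # expected on the order of ~53,000 prompts
```
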
## Prompt template: ChatML

```
<|im_start|>system
Egy segítőkész mesterséges intelligencia asszisztens vagy. Válaszold meg a kérdést legjobb tudásod szerint!<|im_end|>
<|im_start|>user
Ki a legerősebb szuperhős?<|im_end|>
<|im_start|>assistant
A legerősebb szuperhős a Marvel univerzumában Hulk.<|im_end|>
```

(English gloss: system: "You are a helpful artificial intelligence assistant. Answer the question to the best of your knowledge!" / user: "Who is the strongest superhero?" / assistant: "The strongest superhero in the Marvel universe is the Hulk.")

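A minimal inference sketch with the transformers library, applying the template above. The repo id is a placeholder assumption for this model's Hub id, and the Hungarian strings are the system and user turns from the example (glossed above).

```python
# Minimal generation sketch using the ChatML template above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ariel-ml/PULI-LlumiX-32K-instruct"  # assumption: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # the card lists float16 under Limitations
    device_map="auto",
)

# Build the ChatML prompt exactly as shown in the template above.
prompt = (
    "<|im_start|>system\n"
    "Egy segítőkész mesterséges intelligencia asszisztens vagy. "
    "Válaszold meg a kérdést legjobb tudásod szerint!<|im_end|>\n"
    "<|im_start|>user\n"
    "Ki a legerősebb szuperhős?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
# Print only the newly generated assistant turn.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
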
## Base model

- Trained with OpenChatKit ([GitHub](https://github.com/togethercomputer/OpenChatKit))
- The [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) model was continually pretrained on a Hungarian dataset
- The context length was extended to 32K with position interpolation, as sketched below
- Checkpoint: 100,000 steps

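Position interpolation usually surfaces as a RoPE scaling entry in the model config: positions beyond the original training range are rescaled into it rather than extrapolated. The snippet below inspects the base model's config; the linear factor of 8.0 in the comment is the inferred 32768 / 4096 ratio, not a value quoted from this card.

```python
# Inspect how position interpolation appears in the base model's config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("togethercomputer/LLaMA-2-7B-32K")
print(config.max_position_embeddings)  # 32768
print(config.rope_scaling)             # e.g. {"type": "linear", "factor": 8.0}
```
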
## Base model dataset for continued pretraining

- Hungarian: 7.9 billion words, from 763K documents that each exceed 5,000 words
- English: Long Context QA (2 billion words), BookSum (78 million words)

## Limitations

- max_seq_length = 32,768
- float16
- vocab size: 32,000