khulaifi95 committed · verified · Commit ece79fe · Parent(s): ac48381

Update README.md

Files changed (1): README.md (+57 −5)
README.md CHANGED
@@ -15,12 +15,64 @@ datasets:
   - O1-OPEN/OpenO1-SFT
   ---
 
- This model trains on the base llama model with several open-source datasets.
 
- WIP
 
- ## Evaluation
 
- ## Training
 
- ## Usage
  - O1-OPEN/OpenO1-SFT
  ---
 
+ > [!TIP]
+ > This is an experimental model: it may perform poorly on some prompts and can be sensitive to hyperparameters.
+ > It is trained mainly to enhance reasoning capabilities.
 
+ # khulaifi95/Llama-3.1-8B-Reason-Blend-888k
 
+ # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_khulaifi95__Llama-3.1-8B-Reason-Blend-888k).
 
+ | Metric              | Value |
+ |---------------------|------:|
+ | Avg.                |       |
+ | IFEval (0-shot)     |       |
+ | BBH (3-shot)        |       |
+ | MATH Lvl 5 (4-shot) |       |
+ | GPQA (0-shot)       |       |
+ | MuSR (0-shot)       |       |
+ | MMLU-PRO (5-shot)   |       |
+
+ # Prompt Template
+
+ This model uses the `ChatML` prompt template:
+
+ ```
+ <|im_start|>system
+ {System}
+ <|im_end|>
+ <|im_start|>user
+ {User}
+ <|im_end|>
+ <|im_start|>assistant
+ {Assistant}
+ ```
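The template above can also be assembled programmatically. Below is a minimal sketch (not part of the model card) that renders a message list into ChatML with plain string formatting; the special tokens follow the template shown above, and the open trailing `assistant` turn is an assumption about how generation is prompted. In practice, `tokenizer.apply_chat_template` is the preferred way to do this.

```python
def to_chatml(messages):
    """Render a list of {"role", "content"} dicts into a ChatML prompt string."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}\n<|im_end|>")
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
])
print(prompt)
```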
+ # How to use
+
+ ```python
+ # Use a pipeline as a high-level helper
+ from transformers import pipeline
+
+ messages = [
+     {"role": "user", "content": "Who are you?"},
+ ]
+ pipe = pipeline("text-generation", model="khulaifi95/Llama-3.1-8B-Reason-Blend-888k")
+ pipe(messages)
+
+ # Or load the model directly
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("khulaifi95/Llama-3.1-8B-Reason-Blend-888k")
+ model = AutoModelForCausalLM.from_pretrained("khulaifi95/Llama-3.1-8B-Reason-Blend-888k")
+ ```
+
+ # Ethical Considerations
+
+ As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.