---
base_model: Ba2han/Llama-Phi-3_DoRA
datasets:
- Sao10K/Claude-3-Opus-Instruct-15K
- abacusai/SystemChat-1.1
- Ba2han/DollyLlama-5k
language:
- en
license: mit
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Llama-Phi-3_DoRA
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 62.29
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 79.08
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.44
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 54.08
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.4
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 68.01
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/Llama-Phi-3_DoRA
      name: Open LLM Leaderboard
---

# Stark2008/Llama-Phi-3_DoRA-Q8_0-GGUF
This model was converted to GGUF format from [`Ba2han/Llama-Phi-3_DoRA`](https://huggingface.co/Ba2han/Llama-Phi-3_DoRA) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Ba2han/Llama-Phi-3_DoRA) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
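
After installing, it is worth a quick sanity check that the binaries landed on your PATH (`--version` prints build info in recent llama.cpp builds; treat that flag as an assumption if your build is older):

```bash
# Confirm the binaries are reachable and print build info
which llama-cli llama-server
llama-cli --version
```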

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Stark2008/Llama-Phi-3_DoRA-Q8_0-GGUF --hf-file llama-phi-3_dora-q8_0.gguf -p "The meaning to life and the universe is"
```
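
The command above does one-shot text completion of the given prompt. For an interactive chat session instead, recent llama.cpp builds offer a conversation mode; a minimal sketch, assuming your build supports the `-cnv` flag and the chat template is embedded in the GGUF metadata:

```bash
# Start an interactive chat using the model's built-in chat template
llama-cli --hf-repo Stark2008/Llama-Phi-3_DoRA-Q8_0-GGUF --hf-file llama-phi-3_dora-q8_0.gguf -cnv
```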

### Server:
```bash
llama-server --hf-repo Stark2008/Llama-Phi-3_DoRA-Q8_0-GGUF --hf-file llama-phi-3_dora-q8_0.gguf -c 2048
```
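
Once the server is up, you can query it over HTTP. A minimal sketch, assuming the server's default bind address of `127.0.0.1:8080` and its OpenAI-compatible chat endpoint:

```bash
# Send a chat request to the local llama.cpp server
# (adjust the URL if you passed --host/--port)
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Summarize what the GGUF format is in one sentence."}
        ],
        "temperature": 0.7
      }'
```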

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
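
For instance, a CUDA-accelerated build on a Linux machine with an NVIDIA GPU might combine both flags (a sketch for a checkout contemporary with this card; newer llama.cpp versions renamed the flag to `GGML_CUDA` and prefer CMake, and `-j` simply parallelizes compilation):
```
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```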

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Stark2008/Llama-Phi-3_DoRA-Q8_0-GGUF --hf-file llama-phi-3_dora-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Stark2008/Llama-Phi-3_DoRA-Q8_0-GGUF --hf-file llama-phi-3_dora-q8_0.gguf -c 2048
```