Tags: Text Generation · Transformers · Safetensors · GGUF · mistral · Merge · mergekit · lazymergekit · Eval Results · text-generation-inference · Inference Endpoints

Base models: weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp · weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp · ChaoticNeutrals/Eris_Remix_7B · Virt-io/Erebus-Holodeck-7B · jeiku/Eros_Prodigadigm_7B · Epiculous/Mika-7B
weezywitasneezy committed · Commit 89aac31 · 1 parent: 16fcfc5

Update README.md
README.md CHANGED
````diff
@@ -145,6 +145,23 @@ OxytocinErosEngineeringFX-7B-slerp is a merge of the following models using [Laz
 * [weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp](https://huggingface.co/weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp)
 * [weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp](https://huggingface.co/weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp)
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_weezywitasneezy__OxytocinErosEngineeringFX-7B-slerp)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |70.28|
+|AI2 Reasoning Challenge (25-Shot)|66.98|
+|HellaSwag (10-Shot)              |86.48|
+|MMLU (5-Shot)                    |64.14|
+|TruthfulQA (0-shot)              |65.25|
+|Winogrande (5-shot)              |81.45|
+|GSM8k (5-shot)                   |57.39|
+
+
+
+
 ## 🧩 Configuration
 
 ```yaml
@@ -190,16 +207,3 @@ pipeline = transformers.pipeline(
 outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
 print(outputs[0]["generated_text"])
 ```
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_weezywitasneezy__OxytocinErosEngineeringFX-7B-slerp)
-
-| Metric                          |Value|
-|---------------------------------|----:|
-|Avg.                             |70.28|
-|AI2 Reasoning Challenge (25-Shot)|66.98|
-|HellaSwag (10-Shot)              |86.48|
-|MMLU (5-Shot)                    |64.14|
-|TruthfulQA (0-shot)              |65.25|
-|Winogrande (5-shot)              |81.45|
-|GSM8k (5-shot)                   |57.39|
-
````
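The contents of the `## 🧩 Configuration` block fall outside the hunks above, so the exact merge settings are not visible in this diff. As a point of reference only, a mergekit slerp configuration for a two-model merge like this one conventionally follows the shape below; every value here (the layer ranges, the `t` schedule, the base-model choice, the dtype) is an illustrative assumption, not the repository's actual config:

```yaml
# Hypothetical sketch of a mergekit slerp config; the real Configuration
# block is not shown in the diff hunks above.
slices:
  - sources:
      - model: weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp
        layer_range: [0, 32]
      - model: weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp
        layer_range: [0, 32]
merge_method: slerp
base_model: weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]  # interpolation weights across attention layers
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]  # mirrored weights for the MLP layers
    - value: 0.5                    # default weight for all other tensors
dtype: bfloat16
```

In a slerp merge, `t` controls the interpolation point between the two models (0 = first model, 1 = second), and per-filter schedules let attention and MLP weights blend differently across the layer stack.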
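The second hunk shows only the tail of the README's usage snippet (`pipeline = transformers.pipeline(` appears as hunk context). For completeness, a self-contained version of that standard 🤗 transformers pattern is sketched below; everything before the `outputs = pipeline(...)` line is assumed from the common LazyMergekit usage template rather than read from this README:

```python
# Hypothetical reconstruction of the usage snippet whose last two lines
# appear as context in the second hunk; the earlier lines are assumed.
import torch
import transformers

model_id = "weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp"

# Build the prompt with the model's chat template.
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Load the merged model as a text-generation pipeline.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sampling settings match the ones visible in the diff context.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True,
                   temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```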