---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- llama
- llama-3.1
base_model:
- NeverSleep/Lumimaid-v0.2-8B
- kloodia/lora-8b-medic
- nothingiisreal/L3.1-8B-Celeste-V1.5
- kloodia/lora-8b-bio
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- Azazelle/RP_Format_QuoteAsterisk_Llama3
- vicgalle/Configurable-Llama-3.1-8B-Instruct
- kloodia/lora-8b-physic
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
model-index:
- name: Gluon-8B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 50.53
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rmdhirr/Gluon-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 30.34
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rmdhirr/Gluon-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 12.54
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rmdhirr/Gluon-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.28
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rmdhirr/Gluon-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.09
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rmdhirr/Gluon-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 31.2
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=rmdhirr/Gluon-8B
      name: Open LLM Leaderboard
---

# ⚛️ Gluon-8B

This is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit). It combines several Llama-3.1-8B instruction and roleplay fine-tunes using the `model_stock` merge method, with [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) as the base model.
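
A minimal inference sketch with 🤗 Transformers is shown below; the prompt, sampling settings, and dtype are illustrative rather than tuned recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rmdhirr/Gluon-8B"

# Load the merged model in half precision and let Accelerate place it on available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama-3.1-style chat formatting via the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Introduce yourself in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
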
## Quantizations

### GGUF
- [Static Quants](https://huggingface.co/mradermacher/Gluon-8B-GGUF)
- [Imatrix Quants](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF)
- [Only Q8_0](https://huggingface.co/dasChronos1/Gluon-8B-Q8_0-GGUF)
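
To run a GGUF quant locally instead, a small sketch with `llama-cpp-python` follows; the exact `.gguf` filename is an assumption, so check the quant repository for the files it actually ships.

```python
from llama_cpp import Llama

# Pull a quantized file straight from the Hub (requires huggingface_hub).
# The filename below is illustrative; pick one that exists in the repo.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Gluon-8B-GGUF",
    filename="Gluon-8B.Q4_K_M.gguf",
    n_ctx=8192,
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(result["choices"][0]["message"]["content"])
```
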
## Models Merged

The following models were included in the merge:

* [NeverSleep/Lumimaid-v0.2-8B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic)
* [nothingiisreal/L3.1-8B-Celeste-V1.5](https://huggingface.co/nothingiisreal/L3.1-8B-Celeste-V1.5) + [kloodia/lora-8b-bio](https://huggingface.co/kloodia/lora-8b-bio)
* [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated) + [Azazelle/RP_Format_QuoteAsterisk_Llama3](https://huggingface.co/Azazelle/RP_Format_QuoteAsterisk_Llama3)
* [vicgalle/Configurable-Llama-3.1-8B-Instruct](https://huggingface.co/vicgalle/Configurable-Llama-3.1-8B-Instruct) + [kloodia/lora-8b-physic](https://huggingface.co/kloodia/lora-8b-physic)
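
Each `model + LoRA` pairing above uses mergekit's `base+adapter` syntax: the LoRA adapter is applied to its listed base checkpoint first, and the resulting model then takes part in the merge.
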
## Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated+Azazelle/RP_Format_QuoteAsterisk_Llama3
  - model: vicgalle/Configurable-Llama-3.1-8B-Instruct+kloodia/lora-8b-physic
  - model: NeverSleep/Lumimaid-v0.2-8B+kloodia/lora-8b-medic
  - model: nothingiisreal/L3.1-8B-Celeste-V1.5+kloodia/lora-8b-bio
merge_method: model_stock
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
normalize: true
int8_mask: true
dtype: float16
```
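
As a rough pointer, a config like this can be re-run with mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./Gluon-8B --cuda`; the output path and flag shown here are illustrative.
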
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rmdhirr__Gluon-8B).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 23.66 |
| IFEval (0-Shot)     | 50.53 |
| BBH (3-Shot)        | 30.34 |
| MATH Lvl 5 (4-Shot) | 12.54 |
| GPQA (0-shot)       |  8.28 |
| MuSR (0-shot)       |  9.09 |
| MMLU-PRO (5-shot)   | 31.20 |