|
---
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: speechless-zephyr-code-functionary-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 61.52
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=uukuguy/speechless-zephyr-code-functionary-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.88
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=uukuguy/speechless-zephyr-code-functionary-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.71
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=uukuguy/speechless-zephyr-code-functionary-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 44.99
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=uukuguy/speechless-zephyr-code-functionary-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.69
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=uukuguy/speechless-zephyr-code-functionary-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 43.82
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=uukuguy/speechless-zephyr-code-functionary-7b
      name: Open LLM Leaderboard
---
|
|
|
# speechless-zephyr-code-functionary-7b
|
|
|
[4,5,8-bit GGUF models for CPU+GPU inference](https://huggingface.co/uukuguy/speechless-zephyr-code-functionary-7b/tree/main/GGUF) |
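
For quick local CPU+GPU inference with one of the GGUF files, a minimal llama-cpp-python sketch follows. The exact GGUF filename below is an assumption; check the GGUF folder of this repository for the quantization you actually want.

```python
# Minimal GGUF inference sketch with llama-cpp-python.
# NOTE: the filename is hypothetical; pick a real file from the repo's GGUF folder.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="uukuguy/speechless-zephyr-code-functionary-7b",
    filename="GGUF/speechless-zephyr-code-functionary-7b.Q5_K_M.gguf",  # assumed name
)

llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers to GPU
out = llm("Write a Python function that reverses a string.", max_tokens=256)
print(out["choices"][0]["text"])
```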
|
|
|
This model is one of the moloras (Mixture-of-Multi-LoRAs) experiments.
|
|
|
LoRA modules are extracted from the models below (all based on Mistral-7B-v0.1); each LoRA module has its own unique skills. With multi-loras, these modules can be combined statically or dynamically to form a versatile new model.
|
|
|
- HuggingFaceH4/zephyr-7b-beta (Uncensored Model) |
|
- meetkai/functionary-small-v2.2 (Execute functions/plugins) |
|
- uukuguy/speechless-code-mistral-7b-v1.0 (Enhance Coding) |
|
|
|
The entire process is carried out with the extract-lora, merge-lora, and lora-hub tools provided by multi_loras.
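
As a rough illustration of what LoRA extraction does (this is a conceptual sketch, not the multi_loras implementation), the weight delta between a fine-tuned model and its base can be approximated by a low-rank factorization via truncated SVD:

```python
# Sketch: low-rank LoRA extraction from a weight delta via truncated SVD.
# Illustrative only; not the actual multi_loras extract-lora code.
import torch

def extract_lora(w_finetuned: torch.Tensor, w_base: torch.Tensor, rank: int = 64):
    delta = (w_finetuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    # Keep the top-`rank` singular directions so that delta is approximately B @ A.
    lora_b = u[:, :rank] * s[:rank]   # shape (out_features, rank)
    lora_a = vh[:rank, :]             # shape (rank, in_features)
    return lora_a, lora_b

# Applied per projection matrix (q/k/v/o, MLP) of each transformer layer,
# the resulting {A, B} pairs form the extracted LoRA adapter.
```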
|
|
|
The router of mixture-of-multi-loras automatically assembles LoRA modules, using a gradient-free approach to obtain the per-module coefficients and requiring only a handful of inference steps for unseen tasks.
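
Once the coefficients are known, statically merging the modules amounts to adding a weighted sum of their low-rank updates to the base weights. A minimal sketch (the gradient-free coefficient search itself is omitted):

```python
# Sketch: statically merging several LoRA modules with given router coefficients.
import torch

def combine_loras(w_base: torch.Tensor, loras, coefficients):
    """loras: list of (A, B) pairs with A (r, in) and B (out, r); coefficients: list of floats."""
    delta = torch.zeros_like(w_base)
    for (a, b), c in zip(loras, coefficients):
        delta += c * (b @ a)      # each module contributes c_i * B_i @ A_i
    return w_base + delta         # merged weight for this projection
```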
|
|
|
Code: https://github.com/uukuguy/multi_loras |
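
A minimal sketch for loading the merged model with transformers; the prompt and generation settings below are only illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-zephyr-code-functionary-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer(
    "Write a Python function that checks whether a number is prime.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```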
|
|
|
## LM-Evaluation-Harness |
|
|
|
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
|
|
| Metric | Value | |
|
| --- | --- | |
|
| ARC | 61.52 | |
|
| HellaSwag | 83.88 | |
|
| MMLU | 64.71 | |
|
| TruthfulQA | 44.99 | |
|
| Winogrande | 78.69 | |
|
| GSM8K | 43.82 | |
|
| Average | 62.93 | |
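
To reproduce comparable numbers locally, you could run lm-evaluation-harness yourself. The sketch below assumes the v0.4+ Python API; the leaderboard's exact harness version, prompts, and settings may differ.

```python
# Illustrative local evaluation with lm-evaluation-harness (v0.4+ Python API).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=uukuguy/speechless-zephyr-code-functionary-7b,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,  # 25-shot matches the ARC setting used by the leaderboard
)
print(results["results"])
```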
|
|
|
|
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-zephyr-code-functionary-7b) |
|
|
|
| Metric |Value| |
|
|---------------------------------|----:| |
|
|Avg. |62.93| |
|
|AI2 Reasoning Challenge (25-Shot)|61.52| |
|
|HellaSwag (10-Shot) |83.88| |
|
|MMLU (5-Shot) |64.71| |
|
|TruthfulQA (0-shot) |44.99| |
|
|Winogrande (5-shot) |78.69| |
|
|GSM8k (5-shot) |43.82| |
|
|
|
|