---
base_model:
  - ammarali32/multi_verse_model
  - jeiku/Theory_of_Mind_Roleplay_Mistral
  - jeiku/Alpaca_NSFW_Shuffled_Mistral
  - jeiku/Theory_of_Mind_Mistral
  - jeiku/Gnosis_Reformatted_Mistral
  - jeiku/Re-Host_Limarp_Mistral
  - jeiku/Luna_LoRA_Mistral
library_name: transformers
license: cc-by-nc-4.0
tags:
  - mergekit
  - merge
language:
  - en
---
  • GGUF quants!

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

  • This merge is entirely experimental. I've only tested it a few times, but it seems to work. Thanks for all the LoRAs, jeiku; I keep getting driver crashes when training my own.
  • Update: it scores well! This is my highest-scoring model so far.

Merge Method

This model was merged using the task arithmetic merge method, with ammarali32/multi_verse_model as the base.
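For intuition, task arithmetic treats each fine-tuned model as a "task vector" (its parameter delta from the base) and adds a weighted sum of those deltas back onto the base. The NumPy sketch below is purely illustrative, not mergekit's implementation; the normalization step assumes that `normalize: true` rescales the weights by their sum, and the tensors are toy 1-D arrays standing in for real parameter tensors.

```python
import numpy as np

def task_arithmetic(base, finetuned_models, weights, normalize=True):
    """Illustrative task-arithmetic merge: base + weighted sum of
    per-model deltas (task vectors)."""
    deltas = [ft - base for ft in finetuned_models]
    if normalize:
        # Assumed behavior of `normalize: true`: rescale the weights
        # so they sum to 1, keeping the combined delta on scale.
        total = sum(weights)
        weights = [w / total for w in weights]
    return base + sum(w * d for w, d in zip(weights, deltas))

# Toy example with 1-D "parameter" tensors.
base = np.zeros(4)
models = [np.ones(4), 2 * np.ones(4)]
merged = task_arithmetic(base, models, weights=[0.7, 0.65])
```

With the toy weights above, the normalized combination is (0.7 · 1 + 0.65 · 2) / 1.35 per parameter.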

Models Merged

The following models (each the base model with one of jeiku's LoRAs applied, per the configuration below) were included in the merge:

  • ammarali32/multi_verse_model + jeiku/Gnosis_Reformatted_Mistral
  • ammarali32/multi_verse_model + jeiku/Theory_of_Mind_Roleplay_Mistral
  • ammarali32/multi_verse_model + jeiku/Luna_LoRA_Mistral
  • ammarali32/multi_verse_model + jeiku/Re-Host_Limarp_Mistral
  • ammarali32/multi_verse_model + jeiku/Alpaca_NSFW_Shuffled_Mistral
  • ammarali32/multi_verse_model + jeiku/Theory_of_Mind_Mistral

Configuration

The following YAML configuration was used to produce this model:

merge_method: task_arithmetic
base_model: ammarali32/multi_verse_model
parameters:
  normalize: true
models:
  - model: ammarali32/multi_verse_model+jeiku/Gnosis_Reformatted_Mistral
    parameters:
      weight: 0.7
  - model: ammarali32/multi_verse_model+jeiku/Theory_of_Mind_Roleplay_Mistral
    parameters:
      weight: 0.65
  - model: ammarali32/multi_verse_model+jeiku/Luna_LoRA_Mistral
    parameters:
      weight: 0.5
  - model: ammarali32/multi_verse_model+jeiku/Re-Host_Limarp_Mistral
    parameters:
      weight: 0.8
  - model: ammarali32/multi_verse_model+jeiku/Alpaca_NSFW_Shuffled_Mistral
    parameters:
      weight: 0.75  
  - model: ammarali32/multi_verse_model+jeiku/Theory_of_Mind_Mistral
    parameters:
      weight: 0.7                     
dtype: float16
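Since the configuration sets `normalize: true`, the listed weights are relative rather than absolute. Under the assumption that mergekit normalizes by the sum of the weights, the effective coefficient of each model's task vector can be computed directly (a small sketch; the short names are just labels for the jeiku LoRA models above):

```python
# Weights copied from the configuration above.
weights = {
    "Gnosis_Reformatted": 0.70,
    "Theory_of_Mind_Roleplay": 0.65,
    "Luna_LoRA": 0.50,
    "Re-Host_Limarp": 0.80,
    "Alpaca_NSFW_Shuffled": 0.75,
    "Theory_of_Mind": 0.70,
}

# Assumed normalization: divide each weight by the total so the
# effective coefficients sum to 1.
total = sum(weights.values())
effective = {name: w / total for name, w in weights.items()}
```

So, for example, Re-Host_Limarp (weight 0.8) would contribute roughly 0.8 / 4.1 ≈ 0.195 of the combined task vector.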

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 74.73 |
| AI2 Reasoning Challenge (25-Shot) | 72.35 |
| HellaSwag (10-Shot)               | 88.37 |
| MMLU (5-Shot)                     | 63.94 |
| TruthfulQA (0-shot)               | 73.19 |
| Winogrande (5-shot)               | 84.14 |
| GSM8k (5-shot)                    | 66.41 |