
Model Card for ohno-8x7B-GGUF

ohno-8x7B quantized with love.

Upload Notes: Wanted to give this one a spin after seeing its unique merge recipe; I was curious how the Mixtral-8x7B-v0.1_case-briefs LoRA affected the output.

Starting out with Q5_K_M; taking requests for any other quants. All quantizations are based on the original fp16 model.
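If you'd like to try the quant locally, here is a minimal sketch using the llama-cpp-python bindings; the local filename, context size, and sampling settings are placeholder assumptions, not part of this release.

from llama_cpp import Llama

# Load the 5-bit quant; the filename here is hypothetical.
llm = Llama(
    model_path="ohno-8x7b.Q5_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers if built with GPU support
)

out = llm("Write a short note about model merging.", max_tokens=64)
print(out["choices"][0]["text"])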

Any feedback is greatly appreciated!

Original Model Card

ohno-8x7b

this... will either be my magnum opus... or terrible. no in-betweens!

Post-test verdict: it's mostly brain-damaged. Might be my settings or something, idk. The ./output model mentioned below is my own merge using the same recipe as Envoid/Mixtral-Instruct-ITR-8x7B.

output_merge2

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method, with Envoid/Mixtral-Instruct-ITR-8x7B as the base.
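For intuition, here is a toy, single-tensor sketch of what DARE TIES does with the density and weight parameters from the configuration below. This is an illustrative simplification, not mergekit's implementation: DARE randomly drops entries of each task vector and rescales the survivors, and TIES then resolves sign conflicts between models.

import numpy as np

rng = np.random.default_rng(0)

def dare_ties(base, finetuned, densities, weights):
    # base: a base-model tensor; finetuned: matching tensors from each model.
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base                             # task vector
        keep = rng.random(delta.shape) < density      # DARE: drop ~(1 - density)
        delta = np.where(keep, delta, 0.0) / density  # rescale survivors
        deltas.append(weight * delta)
    stacked = np.stack(deltas)
    # TIES: elect a per-parameter sign, keep only agreeing contributions.
    elected = np.sign(stacked.sum(axis=0))
    agreeing = np.where(np.sign(stacked) == elected, stacked, 0.0)
    return base + agreeing.sum(axis=0)

With the values in the configuration, the daybreak model dominates (density 0.66, weight 1.0) while the case-briefs contribution is sparse and lightly weighted (density 0.1, weight 0.25).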

Models Merged

The following models were included in the merge:

- ./output/+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
- Envoid/Mixtral-Instruct-ITR-8x7B+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
- Envoid/Mixtral-Instruct-ITR-8x7B+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: ./output/+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
    parameters:
      density: 0.66
      weight: 1.0
  - model: Envoid/Mixtral-Instruct-ITR-8x7B+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
    parameters:
      density: 0.1
      weight: 0.25
  - model: Envoid/Mixtral-Instruct-ITR-8x7B+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
    parameters:
      density: 0.66
      weight: 0.5
  - model: NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
    parameters:
      density: 0.15
      weight: 0.3
merge_method: dare_ties
base_model: Envoid/Mixtral-Instruct-ITR-8x7B
dtype: float16
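The + syntax in the model names applies a LoRA adapter to a base model before merging. To reproduce the merge, save the YAML above (e.g. as config.yml) and feed it to mergekit, either via the mergekit-yaml CLI or the Python API. A hedged sketch using the Python entry points; the names below match recent mergekit versions and may differ in yours:

import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config shown above into a mergekit configuration.
with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./output_merge2",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if available
        copy_tokenizer=True,             # copy the base model's tokenizer
    ),
)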
GGUF details: 46.7B params, llama architecture, 5-bit quantization (Q5_K_M).