final_merge

This is a merge of pre-trained language models created using mergekit (https://github.com/arcee-ai/mergekit).

Merge Details

Merge Method

This model was merged using the DARE TIES merge method, with ./storage3/input_models/Mistral-7B-v0.1_8133861 as the base. DARE TIES combines two ideas: DARE randomly drops a fraction of each fine-tuned model's parameter deltas and rescales the survivors to preserve their expected magnitude, and TIES resolves sign conflicts between the remaining deltas so that disagreeing updates do not cancel each other out.
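
The sketch below illustrates the core idea on a single parameter tensor. It is a loose illustration only, not mergekit's implementation: the function name dare_ties_merge is made up for the example, and the sign-consensus step is simplified (mergekit additionally supports options such as normalize, which is enabled in the configuration below).

import torch

def dare_ties_merge(base, finetuned, densities, weights, seed=0):
    # base:      base-model parameter tensor
    # finetuned: list of fine-tuned tensors with the same shape as base
    # densities: per-model fraction of delta entries to keep (YAML `density`)
    # weights:   per-model scaling factors (YAML `weight`)
    torch.manual_seed(seed)
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base                        # task vector for this model
        keep = torch.rand_like(delta) < density  # DARE: randomly drop entries
        delta = torch.where(keep, delta / density, torch.zeros_like(delta))  # rescale survivors
        deltas.append(weight * delta)
    stacked = torch.stack(deltas)
    # TIES-style sign consensus: elect a per-parameter sign, then discard
    # contributions that disagree with it before summing.
    elected = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected
    merged = torch.where(agree, stacked, torch.zeros_like(stacked)).sum(dim=0)
    return base + merged

In a real merge this is applied tensor by tensor across all model weights.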

Models Merged

The following models were included in the merge:

  • ./storage3/input_models/WizardMath-7B-V1.1_2027605156
  • ./storage3/input_models/shisa-gamma-7b-v1_4025154171
  • ./storage3/input_models/Abel-7B-002_121690448

Configuration

The following YAML configuration was used to produce this model:

base_model: ./storage3/input_models/Mistral-7B-v0.1_8133861
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 32]
    model: ./storage3/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 1.0
      weight: -0.0378726672672588
  - layer_range: [0, 32]
    model: ./storage3/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.7433311818361178
      weight: 1.5192904356611323
  - layer_range: [0, 32]
    model: ./storage3/input_models/Abel-7B-002_121690448
    parameters:
      density: 0.47833652897680473
      weight: 1.0403117323704718
  - layer_range: [0, 32]
    model: ./storage3/input_models/Mistral-7B-v0.1_8133861
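
In this configuration, density is the fraction of each model's delta parameters that DARE retains, and weight scales that model's contribution; the small negative weight on shisa-gamma-7b-v1 means a fraction of its delta is subtracted rather than added. To reproduce the merge, the config can be passed to mergekit's CLI (e.g. mergekit-yaml config.yaml ./final_merge --cuda) or its Python API. Below is a minimal sketch assuming the YAML above is saved as config.yaml; it follows mergekit's documented run_merge entry point, but the exact options may vary between versions.

import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge recipe (the YAML shown above, saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge and write the result to ./final_merge.
run_merge(
    config,
    out_path="./final_merge",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)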

The merged model has 7.24B parameters, stored as Safetensors in BF16.
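
It loads like any other Mistral-7B-class model with transformers. A minimal sketch, assuming a hypothetical repository id your-org/final_merge (substitute the real id or a local path):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/final_merge"  # placeholder; replace with the real id or a local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint is stored in BF16
    device_map="auto",
)

inputs = tokenizer("What is 17 * 23?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))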