
Buy me a Ko-Fi | Support my work using Patreon

OpenChat-3.5-0106_10.7B_48Layers-Interleaved

This is NOT your usual frankenmerge. Although it was created with mergekit, it does not simply stack copied layers; it follows the Block Expansion method described below.

Merge Details

Merge Method

This model was merged using the passthrough merge method, employing the Block Expansion technique described in the paper LLaMA Pro: Progressive LLaMA with Block Expansion (arXiv:2401.02415).

The authors of the paper added new layers interleaved between the original layers of the model, initializing the weights of each new layer's o_proj and down_proj modules to zero. Because both residual branches of such a block then contribute nothing, the new layers simply pass their input through (as if they were "transparent"), and the model remains functional even without further training. These new layers can then be targeted during training or fine-tuning without risking catastrophic forgetting, provided you follow the authors' method of freezing the original layers and training only the new ones.
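As an illustration (not from the original card), here is a minimal PyTorch sketch of why zeroing o_proj and down_proj makes a pre-norm decoder block "transparent": both residual branches contribute exactly zero, so the block returns its input unchanged. The ZeroedBlock class is a simplified stand-in for the real Mistral block (no actual attention), invented here for demonstration.

```python
import torch
import torch.nn as nn

class ZeroedBlock(nn.Module):
    """Simplified pre-norm decoder block with o_proj and down_proj zeroed."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.o_proj = nn.Linear(dim, dim, bias=False)         # attention output projection
        self.norm2 = nn.LayerNorm(dim)
        self.up_proj = nn.Linear(dim, 4 * dim, bias=False)
        self.down_proj = nn.Linear(4 * dim, dim, bias=False)  # MLP output projection
        nn.init.zeros_(self.o_proj.weight)      # the config's scale 0.0 on o_proj
        nn.init.zeros_(self.down_proj.weight)   # the config's scale 0.0 on down_proj

    def forward(self, x):
        # Each sub-layer's output is multiplied by a zero matrix, so both
        # residual additions add exactly 0 and x passes through unchanged.
        x = x + self.o_proj(self.norm1(x))
        x = x + self.down_proj(torch.relu(self.up_proj(self.norm2(x))))
        return x

x = torch.randn(2, 8, 64)
assert torch.allclose(ZeroedBlock()(x), x)  # the block acts as an identity
```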

This model has not yet received additional training, so it should perform close to the original model.
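Should you want to continue the LLaMA Pro recipe yourself, the sketch below (an assumption, not part of the original card) freezes every parameter and then unfreezes only the interleaved blocks. Per the slice pattern in the configuration below, the new blocks land at every third index (2, 5, 8, ..., 47); the snippet assumes the checkpoint loads as a standard Mistral-style model whose decoder blocks live in model.model.layers.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Pretergeek/OpenChat-3.5-0106_10.7B_48Layers-Interleaved"
)

# Freeze the original weights to avoid catastrophic forgetting.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the zero-initialized expansion blocks (indices 2, 5, 8, ...).
for idx, layer in enumerate(model.model.layers):
    if idx % 3 == 2:
        for param in layer.parameters():
            param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable params: {trainable:,}")
```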

Models Merged

The following models were included in the merge:

openchat/openchat-3.5-0106

Configuration

The following YAML configuration was used to produce this model:

slices:
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [0, 2]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [1, 2]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [2, 4]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [3, 4]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [4, 6]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [5, 6]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [6, 8]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [7, 8]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [8, 10]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [9, 10]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [10, 12]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [11, 12]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [12, 14]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [13, 14]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [14, 16]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [15, 16]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [16, 18]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [17, 18]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [18, 20]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [19, 20]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [20, 22]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [21, 22]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [22, 24]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [23, 24]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [24, 26]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [25, 26]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [26, 28]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [27, 28]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [28, 30]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [29, 30]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [30, 32]
  - sources:
    - model: openchat/openchat-3.5-0106
      layer_range: [31, 32]
      parameters:
        scale:
          - filter: o_proj
            value: 0.0
          - filter: down_proj
            value: 0.0
          - value: 1.0
merge_method: passthrough
dtype: bfloat16
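
The 16 repeated slice pairs above follow a regular pattern (two original layers, then a zeroed duplicate of the second), so the configuration can also be generated programmatically. Below is a short sketch using PyYAML, not part of the original card; the printed YAML can then be fed to mergekit (e.g. via the mergekit-yaml command).

```python
import yaml

slices = []
for start in range(0, 32, 2):
    # Two consecutive original layers...
    slices.append({"sources": [{
        "model": "openchat/openchat-3.5-0106",
        "layer_range": [start, start + 2],
    }]})
    # ...followed by a duplicate of the second layer whose o_proj and
    # down_proj outputs are scaled to zero (the "transparent" block).
    slices.append({"sources": [{
        "model": "openchat/openchat-3.5-0106",
        "layer_range": [start + 1, start + 2],
        "parameters": {"scale": [
            {"filter": "o_proj", "value": 0.0},
            {"filter": "down_proj", "value": 0.0},
            {"value": 1.0},
        ]},
    }]})

config = {"slices": slices, "merge_method": "passthrough", "dtype": "bfloat16"}
print(yaml.dump(config, sort_keys=False))
```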

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric               Value
Avg.                 22.51
IFEval (0-shot)      59.61
BBH (3-shot)         24.06
MATH Lvl 5 (4-shot)   6.80
GPQA (0-shot)         7.27
MuSR (0-shot)        11.78
MMLU-PRO (5-shot)    25.54

Citation

@misc{wu2024llamaproprogressivellama,
      title={LLaMA Pro: Progressive LLaMA with Block Expansion}, 
      author={Chengyue Wu and Yukang Gan and Yixiao Ge and Zeyu Lu and Jiahao Wang and Ye Feng and Ying Shan and Ping Luo},
      year={2024},
      eprint={2401.02415},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2401.02415}, 
}