---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- unsloth/gemma-2b-bnb-4bit
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
---

# Gemma-TinyLLama-Passthrough

Gemma-TinyLLama-Passthrough is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [unsloth/gemma-2b-bnb-4bit](https://huggingface.co/unsloth/gemma-2b-bnb-4bit)
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)

## 🧩 Configuration

```yaml
# models:
#   - model: unsloth/gemma-7b-bnb-4bit
#     layer_range: [0, 32]
#     # no parameters necessary for base model
#   - model: mistralai/Mistral-7B-v0.1
#     layer_range: [24, 32]
# merge_method: passthrough
# # base_model: unsloth/gemma-7b-bnb-4bit
# parameters:
#   normalize: true
#   int8_mask: true
# dtype: float16
slices:
  - sources:
    - model: unsloth/gemma-2b-bnb-4bit
      layer_range: [0, 16]
  - sources:
    - model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
      layer_range: [0, 22]
merge_method: passthrough
dtype: bfloat16
# models:
#   - model: unsloth/gemma-2b-bnb-4bit
#     parameters:
#       density: 0.53
#       weight: 0.45
#   - model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
#     parameters:
#       weight: 0.5
# merge_method: ties
# base_model: unsloth/gemma-2b-bnb-4bit
# parameters:
#   int8_mask: true
# dtype: bfloat16
```
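
## 💻 Usage

Below is a minimal inference sketch using 🤗 Transformers. The repository id `your-username/Gemma-TinyLLama-Passthrough` is a placeholder for wherever the merged weights are hosted, and the generation settings are example values only.

```python
# pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

# Placeholder repo id: replace with the actual Hub path of this merge.
model = "your-username/Gemma-TinyLLama-Passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt from the tokenizer's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Text-generation pipeline; device_map="auto" places the model on available hardware.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```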