---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- unsloth/gemma-2b-bnb-4bit
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Gemma-TinyLLama-Passthrough

Gemma-TinyLLama-Passthrough is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [unsloth/gemma-2b-bnb-4bit](https://huggingface.co/unsloth/gemma-2b-bnb-4bit)
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
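
The passthrough method does not average weights: it stacks the selected layer slices from each source model end to end into a single, deeper network, so the `layer_range` values in the configuration below determine the shape of the result.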
## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: unsloth/gemma-2b-bnb-4bit
        layer_range: [0, 16]
  - sources:
      - model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
        layer_range: [0, 22]
merge_method: passthrough
dtype: bfloat16
```
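
## 💻 Usage

To reproduce the merge, save the YAML above as `config.yaml` and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged --copy-tokenizer`. Below is a minimal inference sketch with 🤗 Transformers; the repo id is a placeholder for wherever the merged weights are actually uploaded, and `device_map="auto"` additionally requires the `accelerate` package:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id: substitute the actual location of the merged weights.
model_id = "your-username/Gemma-TinyLLama-Passthrough"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # picks up the bfloat16 dtype declared in the merge config
    device_map="auto",   # requires the `accelerate` package
)

prompt = "What is a large language model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```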