---
base_model:
- bunnycore/LLama-3.1-4B-TitanFusion
library_name: transformers
quantized_by: Ex_y
base_model_relation: quantized
tags:
- mergekit
- merge
- exl2
license: other
---

EXL2 quants of [bunnycore/LLama-3.1-4B-TitanFusion](https://huggingface.co/bunnycore/LLama-3.1-4B-TitanFusion), made with the default quantization parameters. The 6.5bpw and 8.0bpw quants use an 8-bit lm_head layer, while the 4.25bpw and 5.0bpw quants use a 6-bit lm_head layer.

Note: the branch named 6.0bpw is actually the 6.5bpw quant; I mixed up the names when uploading. A sketch for downloading a specific branch is at the bottom of this card.

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1) as the base.

### Models Merged

The following models were included in the merge:

* [anthracite-org/magnum-v2-4b](https://huggingface.co/anthracite-org/magnum-v2-4b)
* [TheDrummer/Hubble-4B-v1](https://huggingface.co/TheDrummer/Hubble-4B-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Magpie-Align/MagpieLM-4B-Chat-v0.1
  - model: anthracite-org/magnum-v2-4b
  - model: TheDrummer/Hubble-4B-v1
merge_method: model_stock
base_model: Magpie-Align/MagpieLM-4B-Chat-v0.1
dtype: bfloat16
```
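If you want to reproduce the merge from this configuration before quantizing, a minimal sketch using mergekit's Python entry point is below. This is not part of the original card: the config path, output directory, and `MergeOptions` flags are assumptions, and the exact API can differ between mergekit versions (the `mergekit-yaml` CLI is the simpler route).

```python
# Minimal sketch, assuming the YAML above is saved as merge_config.yaml.
# Names follow mergekit's documented Python API; treat them as illustrative,
# not authoritative, since they may change between versions.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./LLama-3.1-4B-TitanFusion",        # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the base model's tokenizer over
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```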
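## Downloading a specific quant (sketch)

Since each bpw lives on its own branch (and the branch named 6.0bpw actually holds the 6.5bpw files), here is a minimal sketch for pulling a single branch with huggingface_hub. The repo id, branch name, and local directory below are placeholders; substitute this repo's actual id and the revision you want.

```python
# Minimal sketch for downloading one EXL2 branch with huggingface_hub.
# REPO_ID and REVISION are placeholders, not confirmed names from this card.
from huggingface_hub import snapshot_download

REPO_ID = "Ex_y/LLama-3.1-4B-TitanFusion-exl2"  # placeholder repo id
REVISION = "8.0bpw"                              # branch holding the desired quant

local_path = snapshot_download(
    repo_id=REPO_ID,
    revision=REVISION,
    local_dir="LLama-3.1-4B-TitanFusion-exl2",   # where to place the files
)
print(f"Quant downloaded to {local_path}")
```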